Optimal nonmyopic value of information in graphical models: efficient algorithms and theoretical limits

  • Authors:
  • Andreas Krause; Carlos Guestrin

  • Affiliations:
  • Carnegie Mellon University; Carnegie Mellon University

  • Venue:
  • IJCAI'05: Proceedings of the 19th International Joint Conference on Artificial Intelligence
  • Year:
  • 2005

Abstract

Many real-world decision-making tasks require us to choose among several expensive observations. In a sensor network, for example, it is important to select the subset of sensors that is expected to provide the strongest reduction in uncertainty. It has been general practice to use heuristic-guided procedures for selecting observations. In this paper, we present the first efficient optimal algorithms for selecting observations for a class of graphical models containing Hidden Markov Models (HMMs). We provide results both for selecting the optimal subset of observations and for obtaining an optimal conditional observation plan. For both problems, we present algorithms for the filtering case, where only observations made in the past are taken into account, and the smoothing case, where all observations are utilized. Furthermore, we prove a surprising result: in most graphical model tasks, an efficient algorithm designed for chain graphs, such as HMMs, cannot be generalized to polytrees. We prove that the value of information problem is NP^PP-hard even for discrete polytrees. It also follows from our results that even computing conditional entropies, which are widely used to measure the value of information, is a #P-complete problem on polytrees. Finally, we demonstrate the effectiveness of our approach on several real-world datasets.
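
For context, a standard formulation of the subset-selection variant of the value of information problem (the notation below is illustrative and not taken verbatim from the paper) asks for the set of observations that maximizes the expected reduction in uncertainty about the hidden variables, subject to a budget on the number of observations:

```latex
% Illustrative notation (not the paper's exact reward functions):
%   S = observable variables, k = observation budget,
%   X = hidden variables of interest, H(.) = Shannon entropy.
\mathcal{O}^{*} \;=\; \operatorname*{argmax}_{\mathcal{O} \subseteq \mathcal{S},\; |\mathcal{O}| \le k}
  \Big[\, H(\mathcal{X}) \;-\; H(\mathcal{X} \mid \mathcal{O}) \,\Big],
\qquad
H(\mathcal{X} \mid \mathcal{O}) \;=\; \sum_{\mathbf{o}} P(\mathcal{O} = \mathbf{o}) \, H(\mathcal{X} \mid \mathcal{O} = \mathbf{o}).
```

The term H(X | O) is the conditional entropy averaged over observation outcomes; the abstract's #P-completeness result on polytrees concerns exactly this kind of quantity, which is what separates the tractable chain-structured case from the general polytree case.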