Space-efficient inference in dynamic probabilistic networks

  • Authors:
  • John Binder; Kevin Murphy; Stuart Russell

  • Affiliations:
  • Computer Science Division, University of California, Berkeley, CA (all authors)

  • Venue:
  • IJCAI'97: Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence - Volume 2
  • Year:
  • 1997

Abstract

Dynamic probabilistic networks (DPNs) are a useful tool for modeling complex stochastic processes. The simplest inference task in DPNs is monitoring, that is, computing a posterior distribution for the state variables at each time step given all observations up to that time. Recursive, constant-space algorithms are well known for monitoring in DPNs and other models. This paper is concerned with hindsight, that is, computing a posterior distribution given both past and future observations. Hindsight is an essential subtask of learning DPN models from data. Existing algorithms for hindsight in DPNs use O(SN) space and time, where N is the total length of the observation sequence and S is the state space size for each time step. They are therefore impractical for hindsight in complex models with long observation sequences. This paper presents an O(S log N)-space, O(SN log N)-time hindsight algorithm. We demonstrate the effectiveness of the algorithm in two real-world DPN learning problems. We also discuss the possibility of an O(S)-space, O(SN)-time algorithm.
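
One way to obtain such a space-time tradeoff is divide-and-conquer smoothing: compute the smoothed marginal at the midpoint of the sequence from one forward and one backward sweep, then recurse on each half with the midpoint messages as boundary conditions, so that only O(log N) messages live on the recursion stack while each slice gets re-swept O(log N) times. The sketch below illustrates this idea for a plain discrete hidden Markov model rather than a general DPN; the function names, the model setup, and the convention that the prior enters as a message one step before the first observation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def forward_step(alpha, A, B, y):
    """Filter one slice forward: predict through A, then condition on symbol y."""
    a = (alpha @ A) * B[:, y]
    return a / a.sum()

def backward_step(beta, A, B, y):
    """Pass the backward message one slice back, folding in the evidence y."""
    b = A @ (B[:, y] * beta)
    return b / b.sum()

def island_smooth(obs, A, B, alpha, beta, lo, hi, out):
    """Fill out[lo..hi] with smoothed marginals P(x_t | all observations).

    alpha is the filtered message entering the interval from the left
    (the posterior over the slice just before lo); beta is the backward
    message entering from the right (over slice hi).  The midpoint
    marginal is computed from one forward and one backward sweep, and
    the two halves are handled recursively with the midpoint messages
    as new boundaries, so only the messages on the recursion stack are
    kept in memory.
    """
    if lo > hi:
        return
    mid = (lo + hi) // 2
    a = alpha
    for t in range(lo, mid + 1):          # forward sweep over lo..mid
        a = forward_step(a, A, B, obs[t])
    b = beta
    for t in range(hi, mid, -1):          # backward sweep down to slice mid
        b = backward_step(b, A, B, obs[t])
    g = a * b                             # smoothed marginal at mid
    out[mid] = g / g.sum()
    island_smooth(obs, A, B, alpha, backward_step(b, A, B, obs[mid]),
                  lo, mid - 1, out)
    island_smooth(obs, A, B, a, beta, mid + 1, hi, out)

# Tiny usage example with random parameters (hypothetical toy model).
rng = np.random.default_rng(0)
S, V, N = 3, 4, 16                         # states, observation symbols, sequence length
A = rng.dirichlet(np.ones(S), size=S)      # transition matrix, rows sum to 1
B = rng.dirichlet(np.ones(V), size=S)      # observation matrix, rows sum to 1
obs = rng.integers(0, V, size=N)
out = [None] * N
# Uniform message entering slice 0; all-ones backward message after the last slice.
island_smooth(obs, A, B, np.full(S, 1.0 / S), np.ones(S), 0, N - 1, out)
```

Each recursive call stores only a constant number of length-S vectors, so peak memory grows with the recursion depth, in line with the O(S log N) space bound, while the repeated sweeps account for the extra log N factor in time.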