Dynamic probabilistic networks (DPNs) are a useful tool for modeling complex stochastic processes. The simplest inference task in DPNs is monitoring, that is, computing a posterior distribution over the state variables at each time step given all observations up to that time. Recursive, constant-space algorithms for monitoring are well known for DPNs and other models. This paper is concerned with hindsight, that is, computing a posterior distribution given both past and future observations. Hindsight is an essential subtask of learning DPN models from data. Existing hindsight algorithms for DPNs use O(SN) space and time, where N is the total length of the observation sequence and S is the size of the state space at each time step. They are therefore impractical for hindsight in complex models with long observation sequences. This paper presents an O(S log N)-space, O(SN log N)-time hindsight algorithm. We demonstrate the effectiveness of the algorithm on two real-world DPN learning problems. We also discuss the possibility of an O(S)-space, O(SN)-time algorithm.
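The space/time trade-off described above can be illustrated with a divide-and-conquer smoother for a plain HMM (a special case of a DPN). This is a hedged sketch, not the paper's exact algorithm: rather than storing all N forward messages as standard forward-backward does, it recurses on the interval midpoint, recomputing messages as needed, so only O(log N) messages are live at once while the total work grows by a log N factor. All function names and the toy model below are invented for illustration.

```python
import numpy as np

def fwd(alpha, T, lik):
    # one filtering step: predict with transition T, condition on evidence lik
    a = (alpha @ T) * lik
    return a / a.sum()

def bwd(beta, T, lik):
    # one backward step: absorb the next step's evidence and transition
    b = T @ (lik * beta)
    return b / b.sum()  # rescaling only; relative values are what matter

def smooth(alpha_lo, beta_hi, T, liks, lo, hi, out):
    """Fill out[lo..hi] with smoothed posteriors, given the filtered
    distribution at step lo and the backward message at step hi.
    Recursion depth is O(log N), with O(1) messages kept per level."""
    if lo == hi:
        g = alpha_lo * beta_hi
        out[lo] = g / g.sum()
        return
    mid = (lo + hi) // 2
    # forward sweep lo -> mid, discarding intermediate messages
    a = alpha_lo
    for t in range(lo + 1, mid + 1):
        a = fwd(a, T, liks[t])
    # backward sweep hi -> mid, discarding intermediate messages
    b = beta_hi
    for t in range(hi - 1, mid - 1, -1):
        b = bwd(b, T, liks[t + 1])
    smooth(alpha_lo, b, T, liks, lo, mid, out)
    smooth(fwd(a, T, liks[mid + 1]), beta_hi, T, liks, mid + 1, hi, out)
```

Each recursion level redoes O(N) message updates in total, giving the O(SN log N)-style time bound in exchange for the O(S log N)-style space bound (per-update cost here is actually quadratic in the number of states, since it multiplies by the full transition matrix).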