We have recently introduced an any-space algorithm for exact inference in Bayesian networks, called Recursive Conditioning (RC), which allows one to trade space for time in increments of X bytes, where X is the number of bytes needed to cache a single floating-point number. In this paper, we present three key extensions of RC. First, we modify the algorithm so that it applies to more general factorizations of probability distributions, including (but not limited to) Bayesian network factorizations. Second, we present a forgetting mechanism that considerably reduces the space requirements of RC, and we compare these requirements with those of variable elimination on a number of realistic networks, showing orders-of-magnitude improvements in certain cases. Third, we present a version of RC for computing maximum a posteriori hypotheses (MAP), which turns out to be the first MAP algorithm to allow a smooth time-space tradeoff. A key advantage of the presented MAP algorithm is that it need not start from scratch for each new query, but can reuse some of its computations across multiple queries, leading to significant savings in certain cases.
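To make the conditioning-plus-caching idea concrete, here is a minimal sketch (not the paper's dtree-based algorithm) on a hypothetical three-node chain A → B → C with made-up CPTs. Conditioning on the cutset variable B decomposes the network into two independent halves; the result of the left half depends only on B's value, so it can be cached at a cost of one float per context, and deleting cache entries ("forgetting") trades that space back for recomputation time:

```python
# Illustrative CPTs for binary variables; all names and numbers are assumptions.
P_A = {0: 0.6, 1: 0.4}                                    # P(A)
P_B_given_A = {(0, 0): 0.7, (0, 1): 0.3,
               (1, 0): 0.2, (1, 1): 0.8}                  # P(B | A), keyed (a, b)
P_C_given_B = {(0, 0): 0.9, (0, 1): 0.1,
               (1, 0): 0.5, (1, 1): 0.5}                  # P(C | B), keyed (b, c)

cache = {}  # context (value of B) -> cached result of the left subnetwork


def p_b(b, use_cache=True):
    """Left subnetwork {A, B}: P(B=b) = sum_a P(a) P(b|a)."""
    if use_cache and b in cache:
        return cache[b]                                   # cache hit: no recomputation
    val = sum(P_A[a] * P_B_given_A[(a, b)] for a in (0, 1))
    if use_cache:
        cache[b] = val                                    # one cached float per context
    return val


def p_c(c, use_cache=True):
    """Condition on cutset variable B; the two halves decompose given B."""
    return sum(p_b(b, use_cache) * P_C_given_B[(b, c)] for b in (0, 1))


print(p_c(0))  # P(C=0) under the CPTs above
```

Passing `use_cache=False` mimics a fully "forgetful" run: every query recomputes the left subnetwork, using no extra memory; in RC this choice is made per cache, giving the X-byte-granularity tradeoff described above.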