The traditional method for belief updating suffers from several problems. First, it is generally believed that the junction tree propagation (JTP) method cannot compute p(X|e) when X is not contained in a single node of the junction tree. Second, the local propagation procedure must be repeated whenever new evidence is observed. Many researchers have attempted to solve the first problem. Contrary to common belief, we show in this paper that p(X|e) can in fact be computed easily by the standard JTP method for any X. We also show that it is not necessary to repeat the local propagation procedure when processing new evidence. More importantly, perhaps, we propose a more efficient method for belief updating, which computes the marginals of the individual nodes in the junction tree only once.
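To make the kind of computation the abstract discusses concrete, the following is a minimal sketch of sum-product propagation on a two-clique junction tree. It assumes a toy chain network A → B → C with binary variables; the clique potentials, variable names, and probability values are illustrative, not taken from the paper.

```python
# Toy junction tree for the chain A -> B -> C:
# cliques C1 = {A, B} and C2 = {B, C}, separator S = {B}.
from itertools import product

# Conditional probability tables (hypothetical numbers).
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}
p_c_given_b = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.5, (1, 1): 0.5}

# Clique potentials: C1 holds p(A)p(B|A); C2 holds p(C|B).
phi1 = {(a, b): p_a[a] * p_b_given_a[(a, b)]
        for a, b in product((0, 1), repeat=2)}
phi2 = {(b, c): p_c_given_b[(b, c)]
        for b, c in product((0, 1), repeat=2)}

# Inward pass: marginalize A out of C1 to get the separator message over B.
msg_b = {b: sum(phi1[(a, b)] for a in (0, 1)) for b in (0, 1)}

# Absorb the message into C2; marginalizing its potential over B yields p(C).
p_bc = {(b, c): phi2[(b, c)] * msg_b[b]
        for b, c in product((0, 1), repeat=2)}
p_c = {c: sum(p_bc[(b, c)] for b in (0, 1)) for c in (0, 1)}

# Brute-force check against the full joint distribution.
p_c_brute = {c: sum(p_a[a] * p_b_given_a[(a, b)] * p_c_given_b[(b, c)]
                    for a, b in product((0, 1), repeat=2))
             for c in (0, 1)}
print(p_c, p_c_brute)
```

Running both computations shows the propagated marginal agreeing with the brute-force joint summation; evidence on a variable would be incorporated by zeroing out the inconsistent entries of a clique potential before propagating.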