Loop Corrections for Approximate Inference on Factor Graphs
The Journal of Machine Learning Research
We propose a method to improve approximate inference methods by correcting for the influence of loops in the graphical model. The method is a generalization and alternative implementation of a recent idea from Montanari and Rizzo (2005). It is applicable to arbitrary factor graphs, provided that the size of the Markov blankets is not too large. It consists of two steps: (i) an approximate inference method, for example, belief propagation, is used to approximate cavity distributions for each variable (i.e., probability distributions on the Markov blanket of a variable for a modified graphical model in which the factors involving that variable have been removed); (ii) all cavity distributions are improved by a message-passing algorithm that cancels out approximation errors by imposing certain consistency constraints. This loop correction (LC) method usually gives significantly better results than the original, uncorrected, approximate inference algorithm that is used to estimate the effect of loops. Indeed, we often observe that the loop-corrected error is approximately the square of the error of the uncorrected approximate inference method. In this article, we compare different variants of the loop correction method with other approximate inference methods on a variety of graphical models, including "real world" networks, and conclude that the LC method generally obtains the most accurate results.
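Step (i) above — removing all factors that involve a variable and marginalizing the remaining model onto that variable's Markov blanket — can be illustrated on a toy model. The sketch below is not the paper's implementation: it uses brute-force enumeration in place of an approximate inference method such as belief propagation, omits the consistency-constraint message passing of step (ii), and the three-variable loop model and all names are illustrative assumptions.

```python
import itertools

import numpy as np

# Toy pairwise factor graph: three binary variables x0, x1, x2 in a single loop.
# Each factor is a 2x2 table indexed by the pair of variables in its scope.
rng = np.random.default_rng(0)
factors = {
    (0, 1): rng.random((2, 2)) + 0.1,
    (1, 2): rng.random((2, 2)) + 0.1,
    (2, 0): rng.random((2, 2)) + 0.1,
}
N_VARS = 3

def joint(factor_tables):
    """Normalized joint by brute-force enumeration (tractable only for toys)."""
    p = np.zeros((2,) * N_VARS)
    for x in itertools.product([0, 1], repeat=N_VARS):
        val = 1.0
        for (i, j), table in factor_tables.items():
            val *= table[x[i], x[j]]
        p[x] = val
    return p / p.sum()

def cavity_distribution(factor_tables, var):
    """Step (i): drop every factor involving `var`, then marginalize the
    remaining (cavity) model onto the Markov blanket of `var`."""
    cavity_factors = {s: t for s, t in factor_tables.items() if var not in s}
    blanket = sorted({v for s in factor_tables if var in s for v in s} - {var})
    p = joint(cavity_factors)
    summed_out = tuple(a for a in range(N_VARS) if a not in blanket)
    marg = p.sum(axis=summed_out)
    return blanket, marg / marg.sum()

blanket, c0 = cavity_distribution(factors, 0)
print("Markov blanket of x0:", blanket)   # [1, 2]
print("cavity distribution on the blanket:\n", c0)
```

In this toy model, removing the two factors that touch x0 leaves only the factor on (x1, x2), so the cavity distribution for x0 is just that table normalized; in the paper's setting the cavity model is still loopy and the cavity distributions are only approximated, which is what step (ii) then corrects.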