Probabilistic reasoning in intelligent systems: networks of plausible inference
On the effective implementation of the iterative proportional fitting procedure
Computational Statistics & Data Analysis - Special issue dedicated to Tomáš Havránek
Minimax and Hamiltonian dynamics of excitatory-inhibitory networks
NIPS '97 Proceedings of the 1997 conference on Advances in neural information processing systems 10
An introduction to variational methods for graphical models
Learning in graphical models
A view of the EM algorithm that justifies incremental, sparse, and other variants
Learning in graphical models
A Tractable Inference Algorithm for Diagnosing Multiple Diseases
UAI '89 Proceedings of the Fifth Annual Conference on Uncertainty in Artificial Intelligence
On the uniqueness of loopy belief propagation fixed points
Neural Computation
Loopy Belief Propagation: Convergence and Effects of Message Errors
The Journal of Machine Learning Research
Estimation and Marginalization Using the Kikuchi Approximation Methods
Neural Computation
Variational probabilistic inference and the QMR-DT network
Journal of Artificial Intelligence Research
Loopy belief propagation for approximate inference: an empirical study
UAI'99 Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence
Iterative join-graph propagation
UAI'02 Proceedings of the Eighteenth conference on Uncertainty in artificial intelligence
Loopy belief propagation and Gibbs measures
UAI'02 Proceedings of the Eighteenth conference on Uncertainty in artificial intelligence
A new class of upper bounds on the log partition function
UAI'02 Proceedings of the Eighteenth conference on Uncertainty in artificial intelligence
Approximate inference and constrained optimization
UAI'03 Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence
Factor graphs and the sum-product algorithm
IEEE Transactions on Information Theory
Constructing free-energy approximations and generalized belief propagation algorithms
IEEE Transactions on Information Theory
Turbo decoding as an instance of Pearl's “belief propagation” algorithm
IEEE Journal on Selected Areas in Communications
Graphical Models, Exponential Families, and Variational Inference
Foundations and Trends® in Machine Learning
Convergent message passing algorithms: a unifying view
UAI '09 Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence
Convexifying the Bethe free energy
UAI '09 Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence
Norm-product belief propagation: primal-dual message-passing for approximate inference
IEEE Transactions on Information Theory
Structured Learning and Prediction in Computer Vision
Foundations and Trends® in Computer Graphics and Vision
Loopy and generalized belief propagation are popular algorithms for approximate inference in Markov random fields and Bayesian networks. Fixed points of these algorithms have been shown to correspond to extrema of the Bethe and Kikuchi free energies, both of which are approximations of the exact Helmholtz free energy. However, belief propagation does not always converge, which motivates approaches that explicitly minimize the Kikuchi/Bethe free energy, such as CCCP and UPS. Here we describe a class of algorithms that solves this typically non-convex constrained minimization problem through a sequence of convex constrained minimizations of upper bounds on the Kikuchi free energy. Intuitively, tighter bounds should yield faster algorithms, and our simulations demonstrate this convincingly. Several ideas are applied to obtain tight convex bounds that yield dramatic speed-ups over CCCP.
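The bound-minimization principle described in the abstract can be illustrated on a toy scalar problem. This is a generic majorize-minimize sketch under invented assumptions, not the paper's Kikuchi algorithm: the non-convex objective f(x) = x² + 3 sin(x) and the curvature constants L are chosen purely for illustration. Each outer step minimizes a convex quadratic upper bound on f, and a tighter bound (a smaller valid L) reaches a stationary point in fewer steps, mirroring the abstract's claim that tighter bounds lead to faster algorithms.

```python
import math

def minimize_with_bound(f_prime, L, x0, tol=1e-8, max_iters=10000):
    """Majorize-minimize sketch: at the current point x, the convex
    quadratic g(y) = f(x) + f'(x)(y - x) + (L/2)(y - x)^2 upper-bounds f
    whenever L >= sup f''.  Minimizing g gives the update y = x - f'(x)/L.
    Returns the stationary point found and the number of outer steps."""
    x = x0
    for t in range(1, max_iters + 1):
        g = f_prime(x)
        if abs(g) < tol:
            return x, t
        x = x - g / L
    return x, max_iters

# Toy non-convex objective: f(x) = x^2 + 3*sin(x), so f''(x) = 2 - 3*sin(x) <= 5.
# Any L >= 5 therefore yields a valid convex quadratic upper bound everywhere.
f_prime = lambda x: 2 * x + 3 * math.cos(x)

x_tight, iters_tight = minimize_with_bound(f_prime, L=5.0, x0=2.0)   # tight bound
x_loose, iters_loose = minimize_with_bound(f_prime, L=20.0, x0=2.0)  # loose bound

print(iters_tight, iters_loose)  # the tighter bound needs fewer outer steps
```

Both runs converge to the same stationary point; the loose bound (L = 20) takes smaller, more conservative steps and so needs more iterations, which is the one-dimensional analogue of the speed-ups over CCCP reported in the simulations.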