Learning and relearning in Boltzmann machines
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
Efficient learning in Boltzmann machines using linear response theory
Neural Computation
A revolution: belief propagation in graphs with cycles
NIPS '97 Proceedings of the 1997 conference on Advances in neural information processing systems 10
An Introduction to Variational Methods for Graphical Models
Machine Learning
Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference
Variational Approximations between Mean Field Theory and the Junction Tree Algorithm
UAI '00 Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
Belief Optimization for Binary Networks: A Stable Alternative to Loopy Belief Propagation
UAI '01 Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence
Correctness of Local Probability Propagation in Graphical Models with Loops
Neural Computation
Loopy belief propagation for approximate inference: an empirical study
UAI'99 Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence
Turbo decoding as an instance of Pearl's “belief propagation” algorithm
IEEE Journal on Selected Areas in Communications
Linear response algorithms for approximate inference in graphical models
Neural Computation
On the uniqueness of loopy belief propagation fixed points
Neural Computation
Expectation Consistent Approximate Inference
The Journal of Machine Learning Research
Visual Recognition and Inference Using Dynamic Overcomplete Sparse Learning
Neural Computation
Approximate learning algorithm in Boltzmann machines
Neural Computation
Message quantization in belief propagation: structural results in the low-rate regime
Allerton'09 Proceedings of the 47th annual Allerton conference on Communication, control, and computing
Boltzmann machine learning with the latent maximum entropy principle
UAI'03 Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence
Inference in Boltzmann machines is NP-hard in general, so approximations are often necessary. We first discuss the first-order mean-field and second-order Onsager truncations of the Plefka expansion of the Gibbs free energy. The Bethe free energy is then introduced and rewritten as a Gibbs free energy, from which a convergent belief optimization algorithm is derived to minimize it. An analytic expression for the linear response estimate of the covariances is found, which is exact on Boltzmann trees. Finally, a number of theorems are proven concerning the Plefka expansion, relating the first-order mean-field and second-order Onsager approximations to the Bethe approximation. Experiments compare the mean-field approximation, the Onsager approximation, belief propagation, and belief optimization.
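To make the approximations concrete, the following is a minimal sketch (not the paper's own implementation) of the first-order mean-field and second-order Onsager/TAP fixed-point updates for a small Boltzmann machine with ±1 units and energy E(s) = −Σ_i θ_i s_i − Σ_{i<j} W_ij s_i s_j, followed by the standard linear response covariance estimate for naive mean field. The weights, biases, damping factor, and iteration count are illustrative assumptions.

```python
import numpy as np

# Illustrative random Boltzmann machine: symmetric weights, zero diagonal.
rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.2, size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
theta = rng.normal(scale=0.1, size=n)

def mean_field(W, theta, onsager=False, iters=500, damp=0.5):
    """Damped fixed-point iteration of the mean-field equations.

    First order (naive mean field):  m_i = tanh(theta_i + sum_j W_ij m_j)
    Second order adds the Onsager reaction term:
        ... - m_i * sum_j W_ij^2 (1 - m_j^2)  inside the tanh.
    """
    m = np.zeros(len(theta))
    for _ in range(iters):
        h = theta + W @ m
        if onsager:
            h -= m * ((W ** 2) @ (1.0 - m ** 2))
        m = (1 - damp) * m + damp * np.tanh(h)
    return m

m_mf = mean_field(W, theta)               # first-order magnetizations
m_tap = mean_field(W, theta, onsager=True)  # second-order magnetizations

# Linear response covariance estimate for naive mean field:
# the susceptibility chi_ij = dm_i/dtheta_j satisfies
# (chi^{-1})_ij = delta_ij / (1 - m_i^2) - W_ij.
chi = np.linalg.inv(np.diag(1.0 / (1.0 - m_mf ** 2)) - W)
```

For weak couplings such as these, the damped iteration converges quickly; stronger weights may require more damping or fail to have a unique fixed point, which is part of what motivates the belief optimization approach discussed in the abstract.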