Probabilistic reasoning in intelligent systems: networks of plausible inference
A revolution: belief propagation in graphs with cycles
NIPS '97 Proceedings of the 1997 conference on Advances in neural information processing systems 10
Learning to estimate scenes from images
Proceedings of the 1998 conference on Advances in neural information processing systems 11
Correctness of Local Probability Propagation in Graphical Models with Loops
Neural Computation
Loopy belief propagation for approximate inference: an empirical study
UAI'99 Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence
Turbo decoding as an instance of Pearl's “belief propagation” algorithm
IEEE Journal on Selected Areas in Communications
We present a novel inference algorithm for arbitrary binary, undirected graphs. Unlike loopy belief propagation, which iterates fixed-point equations, we directly descend on the Bethe free energy. The algorithm consists of two phases: first, we update the pairwise probabilities, given the marginal probabilities at each unit, using an analytic expression. Next, we update the marginal probabilities by following the negative gradient of the Bethe free energy. Both steps are guaranteed to decrease the Bethe free energy, and since it is bounded from below, the algorithm is guaranteed to converge to a local minimum. We also show that the Bethe free energy is equal to the TAP free energy up to second order in the weights. In experiments we confirm that when belief propagation converges, it usually finds the same solutions as our belief optimization method. The stable nature of belief optimization makes it ideally suited for learning graphical models from data.
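The two phases can be sketched concretely. The following NumPy sketch is our own illustration, not the paper's code: for 0/1-valued units with couplings `W` and biases `theta` (names are our assumptions), the pairwise marginal given the singleton marginals has a closed form, because consistency with the model fixes the odds ratio of the pairwise table. The descent phase here uses a fixed step size and finite-difference gradients for brevity, so it forgoes the guaranteed monotonic decrease that the paper's analytic step provides.

```python
import numpy as np

def pairwise_marginal(w, qi, qj):
    """Analytic update of xi = p(x_i=1, x_j=1) given singleton marginals
    qi, qj and coupling w, for 0/1-valued units.  Consistency with the
    model requires p11*p00 / (p10*p01) = exp(w), which yields a
    quadratic in xi; the smaller root is the valid one."""
    a = np.exp(w) - 1.0
    if abs(a) < 1e-12:                      # w ~ 0: units independent
        return qi * qj
    b = 1.0 + a * (qi + qj)
    c = (1.0 + a) * qi * qj
    xi = (b - np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    lo, hi = max(0.0, qi + qj - 1.0), min(qi, qj)
    return min(max(xi, lo + 1e-12), hi - 1e-12)

def bethe_free_energy(W, theta, q):
    """Bethe free energy of a binary pairwise model with energy
    -sum_ij W_ij x_i x_j - sum_i theta_i x_i, with pairwise marginals
    set to their analytic optimum given q (phase one)."""
    deg = (np.abs(W) > 0).sum(axis=1)       # unit degrees in the graph
    F = 0.0
    for i in range(len(q)):
        F += (1.0 - deg[i]) * (q[i] * np.log(q[i])
                               + (1 - q[i]) * np.log(1 - q[i]))
        F -= theta[i] * q[i]
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            if W[i, j] == 0.0:
                continue
            xi = pairwise_marginal(W[i, j], q[i], q[j])
            p = np.array([xi, q[i] - xi, q[j] - xi, 1 - q[i] - q[j] + xi])
            F += (p * np.log(p)).sum() - W[i, j] * xi
    return F

def belief_optimization(W, theta, steps=200, lr=0.05):
    """Phase two: descend on the Bethe free energy over the singleton
    marginals q.  A fixed step size and central finite differences are
    used here for brevity; the paper uses the closed-form gradient and
    step sizes that guarantee a decrease at every iteration."""
    q = np.full(len(theta), 0.5)
    eps, h = 1e-3, 1e-5
    for _ in range(steps):
        g = np.zeros(len(q))
        for i in range(len(q)):
            qp, qm = q.copy(), q.copy()
            qp[i] += h
            qm[i] -= h
            g[i] = (bethe_free_energy(W, theta, qp)
                    - bethe_free_energy(W, theta, qm)) / (2 * h)
        q = np.clip(q - lr * g, eps, 1 - eps)   # keep marginals in (0,1)
    return q
```

As a sanity check on the decoupled case: with all couplings zero, the Bethe free energy separates over units and each marginal converges to the logistic of its bias.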