Applications of gibbs measure theory to loopy belief propagation algorithm
MICAI'06 Proceedings of the 5th Mexican international conference on Artificial Intelligence
The belief propagation (BP) algorithm is a tool for computing beliefs, i.e., marginal probabilities, of loop-free probabilistic networks (e.g., Bayesian networks) in time proportional to the number of nodes. For networks with loops, the algorithm may fail to converge, and even when it converges the beliefs may differ from the exact marginal probabilities, although its application is known to give remarkably good results, for example in coding theory. Tatikonda and Jordan established a theoretical result on the convergence of the algorithm for probabilistic networks with loops, using the theory of Markov random fields on trees, and gave a sufficient condition for convergence. In this paper, we discuss the "impatient" update rule as well as the "lazy" update rule considered by Tatikonda and Jordan. From the viewpoint of the theory of Markov random fields, we show that the two update rules give essentially the same results and that the impatient rule is expected to converge faster than the lazy one. Numerical experiments are also presented.
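The contrast between the two schedules can be sketched in code. The following is a minimal illustration (not the paper's implementation) of loopy BP on a 4-node binary cycle with made-up potentials: the "lazy" (synchronous) rule computes every new message from the previous sweep's messages, while the "impatient" (asynchronous) rule uses each freshly updated message immediately within the same sweep.

```python
# Loopy belief propagation on a 4-node binary cycle, contrasting the "lazy"
# (synchronous) and "impatient" (asynchronous, in-place) update schedules.
# Graph, potentials, and tolerances are illustrative assumptions.

EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]            # a single loop
NEIGHBORS = {i: [] for i in range(4)}
for a, b in EDGES:
    NEIGHBORS[a].append(b)
    NEIGHBORS[b].append(a)

PHI = [[0.6, 0.4], [0.3, 0.7], [0.5, 0.5], [0.8, 0.2]]   # node potentials

def psi(xi, xj):
    """Weakly attractive pairwise potential (kept mild so BP converges)."""
    return 1.2 if xi == xj else 0.8

def new_message(msgs, i, j):
    """Normalized BP message i -> j over the two states of x_j."""
    out = [0.0, 0.0]
    for xj in (0, 1):
        for xi in (0, 1):
            p = PHI[i][xi] * psi(xi, xj)
            for k in NEIGHBORS[i]:
                if k != j:
                    p *= msgs[(k, i)][xi]
            out[xj] += p
    s = out[0] + out[1]
    return [out[0] / s, out[1] / s]

def run_bp(schedule, tol=1e-10, max_iters=500):
    msgs = {(i, j): [0.5, 0.5] for i in range(4) for j in NEIGHBORS[i]}
    for it in range(1, max_iters + 1):
        delta = 0.0
        if schedule == "lazy":                       # synchronous sweep: read old, write new
            new = {key: new_message(msgs, *key) for key in msgs}
            delta = max(abs(new[k][x] - msgs[k][x])
                        for k in msgs for x in (0, 1))
            msgs = new
        else:                                        # impatient: update messages in place
            for key in list(msgs):
                nm = new_message(msgs, *key)
                delta = max(delta, abs(nm[0] - msgs[key][0]),
                            abs(nm[1] - msgs[key][1]))
                msgs[key] = nm
        if delta < tol:
            return msgs, it
    return msgs, max_iters

def beliefs(msgs):
    """Node beliefs: local potential times all incoming messages, normalized."""
    out = []
    for i in range(4):
        b = [PHI[i][x] for x in (0, 1)]
        for k in NEIGHBORS[i]:
            for x in (0, 1):
                b[x] *= msgs[(k, i)][x]
        s = b[0] + b[1]
        out.append([b[0] / s, b[1] / s])
    return out

lazy_msgs, lazy_iters = run_bp("lazy")
imp_msgs, imp_iters = run_bp("impatient")
print("lazy sweeps:", lazy_iters, "impatient sweeps:", imp_iters)
print("beliefs agree:",
      all(abs(a - b) < 1e-6
          for la, ia in zip(beliefs(lazy_msgs), beliefs(imp_msgs))
          for a, b in zip(la, ia)))
```

On this example both schedules reach the same fixed point, matching the claim that the two rules give essentially the same results; the impatient sweep lets fresh information travel around the loop within a single pass, which is the intuition behind its faster convergence.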