This paper considers the loopy belief propagation (LBP) algorithm applied to Gaussian graphical models. It is known that if Gaussian LBP converges, it computes the exact posterior means but incorrect posterior variances. In this paper, we analytically derive the posterior variances for some specially structured graphs and thereby clarify the accuracy of LBP. For single-cycle graphs, we derive an exact expression for the posterior variances and identify the quantity that determines the accuracy of LBP; based on this result, we state a necessary condition for LBP convergence. The same quantity also plays an important role in graphs consisting of a single cycle with attached trees. For graphs of arbitrary topology, we consider the situation where the correlations between pairs of nodes are comparatively small, and we analytically derive the principal values that determine the accuracy of LBP.
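The claim that converged Gaussian LBP yields exact means but incorrect variances can be checked numerically. The sketch below is not taken from the paper; it implements the standard synchronous Gaussian belief propagation updates for a pairwise model p(x) ∝ exp(−½xᵀJx + hᵀx) and runs them on an assumed three-node single-cycle example with a diagonally dominant precision matrix, for which LBP converges.

```python
import numpy as np

def gaussian_lbp(J, h, n_iter=200):
    # Synchronous Gaussian loopy BP for the pairwise model
    # p(x) ∝ exp(-x'Jx/2 + h'x), J the precision matrix.
    n = len(h)
    P = np.zeros((n, n))   # precision message i -> j
    m = np.zeros((n, n))   # linear (mean-related) message i -> j
    nbrs = [[k for k in range(n) if k != i and J[i, k] != 0] for i in range(n)]
    for _ in range(n_iter):
        P_new, m_new = np.zeros_like(P), np.zeros_like(m)
        for i in range(n):
            for j in nbrs[i]:
                # aggregate node i's local terms plus incoming messages except from j
                A = J[i, i] + sum(P[k, i] for k in nbrs[i] if k != j)
                B = h[i] + sum(m[k, i] for k in nbrs[i] if k != j)
                P_new[i, j] = -J[i, j] ** 2 / A
                m_new[i, j] = -J[i, j] * B / A
        P, m = P_new, m_new
    # node beliefs: marginal precisions and means
    prec = np.array([J[i, i] + sum(P[k, i] for k in nbrs[i]) for i in range(n)])
    mean = np.array([(h[i] + sum(m[k, i] for k in nbrs[i])) / prec[i]
                     for i in range(n)])
    return mean, 1.0 / prec  # LBP posterior means and variances

# Hypothetical 3-node single cycle (diagonally dominant, so LBP converges).
J = np.array([[2.0, 0.5, 0.5],
              [0.5, 2.0, 0.5],
              [0.5, 0.5, 2.0]])
h = np.array([1.0, 2.0, 3.0])
mean_lbp, var_lbp = gaussian_lbp(J, h)

Sigma = np.linalg.inv(J)           # exact posterior covariance
mean_exact = Sigma @ h             # exact posterior means
var_exact = np.diag(Sigma)
```

On this example the LBP means agree with `mean_exact` to numerical precision, while `var_lbp` differs from `var_exact` at every node, illustrating the known mean/variance dichotomy the paper starts from.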