Correctness of Local Probability Propagation in Graphical Models with Loops
Neural Computation
Local belief propagation rules of the sort proposed by Pearl (1988) are guaranteed to converge to the optimal beliefs for singly connected networks. Recently, a number of researchers have empirically demonstrated good performance of these same algorithms on networks with loops, but a theoretical understanding of this performance has yet to be achieved. Here we lay the foundation for an understanding of belief propagation in networks with loops. For networks with a single loop, we derive an analytical relationship between the steady-state beliefs in the loopy network and the true posterior probability. Using this relationship, we show a category of networks for which the MAP estimate obtained by belief update and by belief revision can be proven to be optimal (although the beliefs will be incorrect). We show how nodes can use local information in the messages they receive in order to correct the steady-state beliefs. Furthermore, we prove that for all networks with a single loop, the MAP estimate obtained by belief revision at convergence is guaranteed to give the globally optimal sequence of states. The result is independent of the length of the cycle and the size of the state space. For networks with multiple loops, we introduce the concept of a "balanced network" and show simulation results.
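The single-loop guarantee for belief revision can be illustrated with a small sketch. The example below is hypothetical (not from the paper): a 4-node binary MRF whose graph is one cycle, with hand-chosen pairwise potentials. Running synchronous loopy BP with a `max` reduction (belief revision / max-product) and taking the argmax of the steady-state beliefs recovers the same assignment as brute-force MAP search, while swapping in `sum` gives belief update (sum-product), whose steady-state beliefs need not equal the exact marginals.

```python
import itertools
import numpy as np

# Hypothetical single-loop binary MRF: 4 nodes on a cycle, pairwise
# potentials only. The (0,1) edge is biased so the MAP assignment
# (all zeros) is unique.
base = np.array([[2.0, 1.0], [1.0, 2.0]])
psi = {
    (0, 1): np.array([[3.0, 1.0], [1.0, 2.0]]),  # psi[(i, j)][x_i, x_j]
    (1, 2): base,
    (2, 3): base,
    (3, 0): base,
}
edges = list(psi)
n = 4

neighbors = {i: [] for i in range(n)}
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

def edge_pot(i, j):
    """Pairwise potential oriented as [x_i, x_j]."""
    return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

def run_bp(reduce_op, iters=500):
    """Synchronous loopy BP. reduce_op=np.sum is belief update
    (sum-product); np.max is belief revision (max-product).
    Returns the normalized steady-state node beliefs."""
    m = {(i, j): np.ones(2) for i in range(n) for j in neighbors[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in m:
            # Product of incoming messages at i, excluding the one from j.
            inc = np.ones(2)
            for k in neighbors[i]:
                if k != j:
                    inc = inc * m[(k, i)]
            # Reduce over x_i (axis 0) to get a message over x_j.
            msg = reduce_op(edge_pot(i, j) * inc[:, None], axis=0)
            new[(i, j)] = msg / msg.sum()
        m = new
    beliefs = np.ones((n, 2))
    for i in range(n):
        for k in neighbors[i]:
            beliefs[i] *= m[(k, i)]
    return beliefs / beliefs.sum(axis=1, keepdims=True)

def exact_map():
    """Brute-force MAP assignment over the joint distribution."""
    best, best_w = None, -1.0
    for x in itertools.product([0, 1], repeat=n):
        w = 1.0
        for i, j in edges:
            w *= psi[(i, j)][x[i], x[j]]
        if w > best_w:
            best, best_w = x, w
    return best

bp_map = tuple(int(np.argmax(b)) for b in run_bp(np.max))
print(bp_map, exact_map())  # belief revision matches brute-force MAP here
```

The `reduce_op` argument is the only difference between the two schemes, which makes the abstract's contrast concrete: on this single loop, max-product's argmax agrees with the exact MAP, even though the sum-product beliefs themselves are not the true posterior marginals.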