Motivated by distributed inference applications in unreliable communication networks, we adapt the popular (sum-product) belief propagation (BP) algorithm to the constraint of discrete-valued messages. We show that, in contrast to conventional BP, the optimal message-generation rules are node-dependent and iteration-dependent, with each rule making explicit use of local memory from all past iterations. These results expose both the intractability of optimal design and an inherent structure that can be exploited for tractable approximate design. We propose one such approximation and demonstrate its efficacy on canonical examples. We also discuss extensions to communication networks with lossy links (e.g., erasures) or topologies that differ from the graph underlying the probabilistic model.
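To make the setting concrete, the following is a minimal sketch (my own illustration, not the paper's proposed design) of conventional sum-product BP on a small binary chain, where each transmitted message is additionally quantized onto a finite grid — a crude stand-in for the discrete-valued (finite-alphabet) message constraint. The potentials, node count, and quantization rule here are all arbitrary choices for demonstration.

```python
import itertools
import numpy as np

np.random.seed(0)

# A 3-node binary chain MRF: unary potentials phi[i][x_i] and
# pairwise potentials psi[(i, j)][x_i, x_j] (arbitrary positive values).
edges = [(0, 1), (1, 2)]
phi = {i: np.random.rand(2) + 0.1 for i in range(3)}
psi = {e: np.random.rand(2, 2) + 0.1 for e in edges}
nbrs = {0: [1], 1: [0, 2], 2: [1]}

def quantize(msg, levels):
    """Snap a normalized message onto a uniform grid of `levels` values,
    keeping entries strictly positive (a simple finite-alphabet constraint)."""
    msg = msg / msg.sum()
    q = np.round(msg * (levels - 1)) / (levels - 1)
    q = np.maximum(q, 1.0 / (levels - 1))
    return q / q.sum()

def bp_marginals(levels=None, iters=10):
    """Synchronous sum-product BP; if `levels` is set, every outgoing
    message is quantized before 'transmission'. Returns node marginals."""
    msgs = {(i, j): np.ones(2) / 2 for i in nbrs for j in nbrs[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            # Orient the pairwise potential so its first index is x_i.
            pot = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
            incoming = np.ones(2)
            for k in nbrs[i]:
                if k != j:
                    incoming = incoming * msgs[(k, i)]
            # Standard update: m_{i->j}(x_j) = sum_{x_i} psi * phi_i * prod incoming
            m = pot.T @ (phi[i] * incoming)
            m = m / m.sum()
            new[(i, j)] = quantize(m, levels) if levels else m
        msgs = new
    marg = {}
    for i in nbrs:
        b = phi[i].copy()
        for k in nbrs[i]:
            b = b * msgs[(k, i)]
        marg[i] = b / b.sum()
    return marg

exact = bp_marginals(levels=None)   # on a tree, unquantized BP is exact
coarse = bp_marginals(levels=4)     # messages restricted to a 4-level alphabet
err = max(abs(exact[i] - coarse[i]).max() for i in range(3))
print(f"max marginal error with 4-level messages: {err:.3f}")
```

Note that this memoryless per-iteration quantizer is exactly the kind of naive rule the abstract argues against: the paper's point is that optimal discrete-valued message rules are node- and iteration-dependent and use memory of all past iterations, which this sketch deliberately omits.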