Variational Approximations between Mean Field Theory and the Junction Tree Algorithm
UAI '00 Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
Global variational approximation methods in graphical models allow efficient approximate inference of complex posterior distributions by using a simpler model. The choice of approximating model determines a tradeoff between the complexity of the approximation procedure and the quality of the approximation. In this paper, we consider variational approximations based on two classes of models that are richer than standard Bayesian networks, Markov networks, or mixture models; as such, these classes allow better tradeoffs along the spectrum of approximations. The first class is chain graphs, which capture distributions that are partially directed. The second class is directed graphs (Bayesian networks) with additional latent variables. Both classes can represent multi-variable dependencies that cannot easily be represented within a Bayesian network.
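To make the tradeoff concrete, here is a minimal sketch (not from the paper) of the simplest point on that spectrum: fitting a fully factorized mean-field model q(x1, x2) = q1(x1) q2(x2) to a tiny two-variable Markov network by coordinate ascent, which minimizes KL(q || p). The potential values are illustrative assumptions; a richer approximating family (a chain graph, or a model with latent variables) would close the remaining KL gap at the cost of a more involved update.

```python
import numpy as np

# Hypothetical toy joint over two binary variables (illustrative, not from
# the paper): an unnormalized pairwise potential with positive coupling.
logpot = np.array([[2.0, 0.5],
                   [0.5, 2.0]])        # log phi(x1, x2)
p = np.exp(logpot)
p /= p.sum()                            # exact (normalized) joint
logp = np.log(p)

# Mean-field approximation q(x1, x2) = q1(x1) q2(x2), fitted by
# coordinate-ascent updates q1(x1) oc exp(E_q2[log p(x1, x2)]), etc.
q1 = np.array([0.5, 0.5])
q2 = np.array([0.6, 0.4])               # asymmetric start
for _ in range(50):
    q1 = np.exp(logp @ q2); q1 /= q1.sum()
    q2 = np.exp(q1 @ logp); q2 /= q2.sum()

q = np.outer(q1, q2)
kl = np.sum(q * (np.log(q) - logp))     # KL(q || p) >= 0
print(round(kl, 4))                     # residual gap of the factorized fit
```

Because the factorized family cannot express the correlation encoded by the coupling, the KL divergence stays strictly positive here; that residual gap is exactly what richer approximating structures aim to reduce.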