Dynamic trees are mixtures of tree-structured belief networks. They solve some of the problems of fixed-tree networks, at the cost of making exact inference intractable; for this reason, approximate methods such as sampling or mean field approaches have been used. However, mean field approximations assume a factorised distribution over node states, which seems unlikely in the posterior because nodes are highly correlated in the prior. Here a structured variational approach is used, in which the posterior distribution over the non-evidential nodes is itself approximated by a dynamic tree. This form turns out to be tractable and efficient to work with. The result is a set of update rules that propagate information through the network to obtain both a full variational approximation and the relevant marginals. These propagation rules are more efficient than the mean field approach and give noticeable quantitative and qualitative improvements in the inference, and the marginals they yield approximate the posterior better than loopy propagation on a small toy problem.
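The abstract's key objection to mean field is that a fully factorised q cannot capture the strong correlations that the prior induces between nodes. A minimal sketch of that factorisation assumption, on a toy two-node Ising-style pair rather than the paper's dynamic trees (the model, parameter names, and functions below are illustrative assumptions, not the paper's actual update rules), shows how coordinate-ascent mean-field updates can misestimate a marginal once the coupling between nodes is non-zero:

```python
import math

def exact_mean(J, h1, h2):
    """Exact marginal mean of x1 under the toy joint
    p(x1, x2) ∝ exp(J*x1*x2 + h1*x1 + h2*x2), x_i ∈ {-1, +1},
    computed by enumerating all four states."""
    Z = 0.0
    m1 = 0.0
    for x1 in (-1, 1):
        for x2 in (-1, 1):
            w = math.exp(J * x1 * x2 + h1 * x1 + h2 * x2)
            Z += w
            m1 += x1 * w
    return m1 / Z

def mean_field_mean(J, h1, h2, iters=200):
    """Mean-field estimate of the same marginal: assume a factorised
    q(x1, x2) = q1(x1) q2(x2) and iterate the standard fixed-point
    updates m_i = tanh(h_i + J * m_j) to convergence."""
    m1 = m2 = 0.0
    for _ in range(iters):
        m1 = math.tanh(h1 + J * m2)
        m2 = math.tanh(h2 + J * m1)
    return m1
```

With J = 0 the two nodes are independent, so the factorised form is exact and both functions agree; with J > 0 the mean-field estimate drifts away from the exact marginal. A structured approximation, in the spirit of the paper, would instead retain a coupling inside q rather than forcing it to factorise.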