We propose a novel method for approximate inference in Bayesian networks (BNs). The idea is to sample data from a BN, learn a latent tree model (LTM) from the data offline, and, when online, perform inference with the LTM instead of the original BN. Because LTMs are tree-structured, inference in them takes linear time. At the same time, they can represent complex relationships among their leaf nodes, so the approximation accuracy is often good. Empirical evidence shows that our method achieves good approximation accuracy at low online computational cost.
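To illustrate why tree structure makes online inference cheap, the following is a minimal sketch (not the authors' implementation) of exact posterior-marginal computation in a tree-structured model via an upward message pass. The node names, the `tree_marginal` function, and the dictionary-based encoding of the tree are all illustrative assumptions; the key point is that each node is visited exactly once, so the cost is linear in the number of nodes.

```python
import numpy as np

def tree_marginal(root, children, prior, cpts, evidence):
    """Posterior marginal of the root of a tree-structured model.

    children[v] -> list of v's children (leaves map to []),
    prior       -> P(root), shape (k,),
    cpts[v]     -> P(v | parent(v)) as a (k_parent, k_v) array,
    evidence    -> {observed node: state index}.
    One upward sweep visits each node once, hence linear time.
    """
    def message(v):
        # Likelihood of the evidence in v's subtree, per state of v.
        n_states = len(prior) if v == root else cpts[v].shape[1]
        lam = np.ones(n_states)
        if v in evidence:
            mask = np.zeros(n_states)
            mask[evidence[v]] = 1.0
            lam = lam * mask
        for c in children.get(v, []):
            # Sum out the child's state: P(c | v) @ lambda_c.
            lam = lam * (cpts[c] @ message(c))
        return lam

    post = prior * message(root)
    return post / post.sum()

# Toy latent tree: hidden root H with two observed leaves X1, X2
# (all parameters are made up for illustration).
children = {'H': ['X1', 'X2'], 'X1': [], 'X2': []}
prior = np.array([0.6, 0.4])                      # P(H)
cpts = {'X1': np.array([[0.9, 0.1], [0.2, 0.8]]),  # P(X1 | H)
        'X2': np.array([[0.7, 0.3], [0.4, 0.6]])}  # P(X2 | H)
post = tree_marginal('H', children, prior, cpts, {'X1': 0})
# post is P(H | X1=0); observing X1=0 shifts belief towards H=0.
```

In an LTM learned from BN samples, the leaves would correspond to the BN's variables and the internal nodes to latent variables, so an online query reduces to sweeps like the one above.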