We consider algorithms for learning Mixtures of Bagged Markov Trees for density estimation. In problems defined over many variables where few observations are available, such mixtures generally outperform a single Markov tree that maximizes the data likelihood, but they are far more expensive to compute. In this paper, we describe new algorithms for approximating such models, with the aim of speeding up learning without sacrificing accuracy. Specifically, we propose a filtering step, obtained as a by-product of computing a first Markov tree, that avoids considering poor candidate edges in the subsequently generated trees. We compare these algorithms on synthetic data sets to Mixtures of Bagged Markov Trees, to a single Markov tree learned with the classical Chow-Liu algorithm, and to a recently proposed randomized scheme for building tree mixtures.
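The Chow-Liu baseline mentioned above can be sketched as follows: estimate pairwise mutual information from the data, then take a maximum-weight spanning tree over those weights. The optional `candidate_edges` argument illustrates the filtering idea from the abstract (restricting later trees to a pre-screened edge set); the function names are illustrative, not taken from the paper.

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(data, i, j):
    """Empirical mutual information (in nats) between variables i and j.

    `data` is a list of equal-length tuples of discrete values.
    """
    n = len(data)
    ci = Counter(row[i] for row in data)
    cj = Counter(row[j] for row in data)
    cij = Counter((row[i], row[j]) for row in data)
    mi = 0.0
    for (a, b), nab in cij.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), simplified with counts
        mi += (nab / n) * math.log(nab * n / (ci[a] * cj[b]))
    return mi

def chow_liu_tree(data, n_vars, candidate_edges=None):
    """Maximum-weight spanning tree over pairwise mutual information.

    Uses Kruskal's algorithm with union-find. If `candidate_edges` is
    given, only those (i, j) pairs are scored -- a stand-in for the
    filtering step described in the abstract.
    """
    pairs = candidate_edges or list(combinations(range(n_vars), 2))
    edges = sorted(((mutual_information(data, i, j), i, j) for i, j in pairs),
                   reverse=True)
    parent = list(range(n_vars))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:           # adding (i, j) creates no cycle
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

# Toy usage: variables 0 and 1 are perfectly correlated, 2 is independent,
# so the learned tree must contain the edge (0, 1).
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)] * 5
tree = chow_liu_tree(data, 3)
```

A bagged mixture in the paper's spirit would repeat this on bootstrap resamples of the rows and average the resulting tree densities; the filtering variant would pass the strongest edges of the first tree as `candidate_edges` to the later calls.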