We present a novel approach to structure learning for graphical models. By modeling clique densities in decomposable models with nonparametric estimates, both discrete and continuous distributions are handled in a unified framework, and consistency of the underlying probabilistic model is guaranteed. Model selection is based on predictive assessment, with efficient algorithms that allow fast greedy forward and backward selection within the class of decomposable models. We demonstrate the validity of this structure learning approach on toy data and on two large sets of gene expression data.
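The ideas in the abstract can be illustrated with a minimal sketch: greedy forward selection scored by held-out predictive log-likelihood, with clique densities fit nonparametrically. This is an illustrative assumption, not the paper's exact algorithm — it restricts the search to forests (every forest is decomposable), uses SciPy's `gaussian_kde` for the density estimates, and the toy data and scoring details are invented for the example.

```python
# Hedged sketch: greedy forward selection of a decomposable (here: forest)
# model with nonparametric clique densities, scored by held-out predictive
# log-likelihood. All specifics (data, KDE choice) are illustrative.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy continuous data: x0 depends on x1; x2 is independent of both.
n = 400
x1 = rng.normal(size=n)
x0 = x1 + 0.3 * rng.normal(size=n)
x2 = rng.normal(size=n)
data = np.column_stack([x0, x1, x2])
train, held = data[: n // 2], data[n // 2 :]

def edge_gain(i, j):
    """Predictive gain of adding edge (i, j): the held-out value of
    log f(xi, xj) - log f(xi) - log f(xj), summed over held-out points."""
    joint = gaussian_kde(train[:, [i, j]].T)
    fi = gaussian_kde(train[:, i])
    fj = gaussian_kde(train[:, j])
    return (np.log(joint(held[:, [i, j]].T))
            - np.log(fi(held[:, i]))
            - np.log(fj(held[:, j]))).sum()

d = train.shape[1]
parent = list(range(d))  # union-find roots, to keep the graph a forest

def find(u):
    while parent[u] != u:
        parent[u] = parent[parent[u]]  # path compression
        u = parent[u]
    return u

# Greedy forward selection: repeatedly add the acyclic edge with the
# largest positive predictive gain; stop when no edge improves the score.
edges = []
candidates = [(i, j) for i in range(d) for j in range(i + 1, d)]
while True:
    best = max(((edge_gain(i, j), i, j) for (i, j) in candidates
                if find(i) != find(j)), default=None)
    if best is None or best[0] <= 0:
        break
    _, i, j = best
    edges.append((i, j))
    parent[find(i)] = find(j)

print(edges)
```

With this seed, the strongly dependent pair (x0, x1) yields a large predictive gain and is selected first, while edges involving the independent x2 contribute little. Backward selection would work analogously, removing the edge whose deletion least hurts (or most helps) the held-out score.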