The problem of learning forest-structured discrete graphical models from i.i.d. samples is considered. An algorithm is proposed that prunes the Chow-Liu tree via adaptive thresholding. This algorithm is shown to be both structurally consistent and risk consistent; moreover, for a fixed model size, the probability of error in structure learning decays faster than any polynomial in the number of samples. In the high-dimensional scenario, where the model size d and the number of edges k scale with the number of samples n, sufficient conditions on (n, d, k) are given under which the algorithm remains structurally and risk consistent. In addition, the extremal structures for learning are identified: the independent (resp. tree) model is proved to be the hardest (resp. easiest) to learn with the proposed algorithm, in terms of the error rate for structure learning.
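The pruned Chow-Liu procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses plug-in (empirical) mutual information estimates, Kruskal's algorithm for the max-weight spanning tree, and a fixed threshold in place of the adaptive, sample-size-dependent threshold analyzed in the abstract; the function names and the choice of threshold are assumptions for illustration.

```python
import itertools
import numpy as np

def empirical_mutual_info(x, y, n_states):
    """Plug-in estimate of the mutual information I(X;Y) (in nats)
    from paired samples of two discrete variables."""
    joint = np.zeros((n_states, n_states))
    for a, b in zip(x, y):
        joint[a, b] += 1.0
    joint /= len(x)
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    mi = 0.0
    for a in range(n_states):
        for b in range(n_states):
            if joint[a, b] > 0:
                mi += joint[a, b] * np.log(joint[a, b] / (px[a] * py[b]))
    return mi

def chow_liu_forest(samples, n_states, threshold):
    """Build the Chow-Liu max-weight spanning tree over the d variables
    (columns of `samples`), then prune edges whose empirical mutual
    information falls below `threshold`, yielding a forest.

    Here `threshold` is a fixed constant; the analyzed algorithm uses an
    adaptive threshold that shrinks with the sample size.
    """
    _, d = samples.shape
    # Edge weights: empirical mutual information for every pair of variables.
    edges = []
    for i, j in itertools.combinations(range(d), 2):
        w = empirical_mutual_info(samples[:, i], samples[:, j], n_states)
        edges.append((w, i, j))
    edges.sort(reverse=True)  # Kruskal: consider heaviest edges first

    parent = list(range(d))  # union-find over variables
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    forest = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj  # edge belongs to the Chow-Liu tree
            if w >= threshold:  # pruning step: keep only strong edges
                forest.append((i, j, w))
    return forest
```

On a toy model with a strongly correlated pair (X0, X1) and an independent X2, the pruning step discards the spurious low-MI edges that attach X2 to the tree, recovering a one-edge forest.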