We present a unified and complete account of maximum entropy density estimation subject to constraints represented by convex potential functions or, alternatively, by convex regularization. We provide fully general performance guarantees and an algorithm with a complete convergence proof. As special cases, we easily derive performance guarantees for many known regularization types, including ℓ₁, ℓ₂, ℓ₂², and ℓ₁ + ℓ₂² style regularization. We propose an algorithm that solves a large and general subclass of generalized maximum entropy problems, including all those discussed in the paper, and prove its convergence. Our approach generalizes and unifies techniques based on information geometry and Bregman divergences as well as those based more directly on compactness. Our work is motivated by a novel application of maximum entropy to species distribution modeling, an important problem in conservation biology and ecology. In a set of experiments on real-world data, we demonstrate the utility of maximum entropy in this setting. We explore the effects of different feature types, sample sizes, and regularization levels on the performance of maxent, and discuss the interpretability of the resulting models.
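To make the ℓ₁-regularized special case concrete, the following is a minimal sketch of maxent density estimation over a finite domain, in which the moment constraints |E_q[f_j] - empirical mean of f_j| ≤ β_j are relaxed and the dual becomes an ℓ₁-regularized log-loss. It uses proximal gradient descent with soft-thresholding (ISTA-style), not the paper's own algorithm; the feature matrix F, the uniform default distribution, and the function maxent_l1 are illustrative assumptions, not part of the paper.

import numpy as np

def maxent_l1(F, sample_idx, beta=0.1, lr=0.1, iters=500):
    """l1-regularized maxent density estimation on a finite domain (a sketch).

    F          : (n, d) feature matrix, one row per domain point
    sample_idx : indices of the observed sample points within the domain
    beta       : l1 weight, corresponding to the width of the relaxed constraints
    Returns the dual weights lam (d,) and the fitted Gibbs distribution q (n,).
    """
    n, d = F.shape
    f_bar = F[sample_idx].mean(axis=0)      # empirical feature expectations

    lam = np.zeros(d)
    for _ in range(iters):
        # Gibbs distribution q(x) ∝ exp(lam . f(x)) over a uniform default
        logits = F @ lam
        logits -= logits.max()              # numerical stability
        q = np.exp(logits)
        q /= q.sum()

        # gradient of the smooth part: model expectations minus empirical ones
        grad = F.T @ q - f_bar
        lam -= lr * grad

        # proximal step: soft-thresholding enforces the l1 penalty
        lam = np.sign(lam) * np.maximum(np.abs(lam) - lr * beta, 0.0)

    logits = F @ lam
    logits -= logits.max()
    q = np.exp(logits)
    q /= q.sum()
    return lam, q

The soft-thresholding step drives many coordinates of lam exactly to zero, which mirrors the feature-selection effect of ℓ₁-style maxent discussed in the paper: features whose empirical and model expectations already agree within β contribute nothing to the fitted Gibbs distribution.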