We present a unified and complete account of maximum entropy distribution estimation subject to constraints represented by convex potential functions or, alternatively, by convex regularization. We provide fully general performance guarantees and an algorithm with a complete convergence proof. As special cases, we easily derive performance guarantees for many known regularization types, including $\ell_1$, $\ell_2$, $\ell_2^2$, and $\ell_1 + \ell_2^2$-style regularization. Furthermore, our general approach enables us to use information about the structure of the feature space or about sample selection bias to derive entirely new regularization functions with superior guarantees. We propose an algorithm that solves a large and general subclass of generalized maxent problems, including all those discussed in the paper, and prove its convergence. Our approach generalizes techniques based on information geometry and Bregman divergences as well as those based more directly on compactness.
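As a concrete illustration of one special case, $\ell_1$-regularized maxent over a finite domain amounts to minimizing the dual objective $\log Z(w) - w \cdot \bar{f} + \lambda \lVert w \rVert_1$, where $\bar{f}$ is the vector of empirical feature expectations. The sketch below is our own minimal implementation using proximal gradient descent (ISTA), not the algorithm proposed in the paper; the function name `maxent_l1` and all parameter choices are illustrative.

```python
import numpy as np

def maxent_l1(feats, emp_mean, lam=0.01, lr=0.25, iters=5000):
    """l1-regularized maxent over a finite domain via proximal gradient (ISTA).

    feats:    (n_points, d) feature values at every point of the domain.
    emp_mean: (d,) empirical feature expectations from the sample.

    Minimizes the dual objective  log Z(w) - w . emp_mean + lam * ||w||_1,
    whose optimum satisfies |E_p[f_j] - emp_mean_j| <= lam for every feature,
    i.e. the exact moment constraints are relaxed into a box of width lam.
    """
    n, d = feats.shape
    w = np.zeros(d)
    for _ in range(iters):
        logits = feats @ w
        logits -= logits.max()              # numerical stability
        p = np.exp(logits)
        p /= p.sum()                        # Gibbs distribution p_w
        grad = feats.T @ p - emp_mean       # gradient of the smooth part
        w -= lr * grad                      # gradient step
        # proximal step: soft-thresholding induced by the l1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    logits = feats @ w
    logits -= logits.max()
    p = np.exp(logits)
    p /= p.sum()
    return w, p

# Toy usage: domain {0, 0.25, 0.5, 0.75, 1} with features (x, x^2).
xs = np.arange(5) / 4.0
feats = np.column_stack([xs, xs ** 2])
emp_mean = np.array([0.5, 0.3])             # hypothetical sample moments
w, p = maxent_l1(feats, emp_mean)
```

The $\ell_1$ penalty both regularizes and sparsifies: features whose empirical means are matched within $\lambda$ by the base distribution receive exactly zero weight, which is the mechanism behind the sparsity-based guarantees mentioned above.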