On the learnability of discrete distributions
STOC '94 Proceedings of the twenty-sixth annual ACM symposium on Theory of computing
On the learnability and usage of acyclic probabilistic finite automata
COLT '95 Proceedings of the eighth annual conference on Computational learning theory
A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing & STOC '94, May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT '95), March 13–15, 1995
Improved boosting algorithms using confidence-rated predictions
COLT '98 Proceedings of the eleventh annual conference on Computational learning theory
Statistical methods for speech recognition
Finite-State Language Processing
Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition
Boosting Neighborhood-Based Classifiers
ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning
Boosting the margin: A new explanation for the effectiveness of voting methods
ICML '97 Proceedings of the Fourteenth International Conference on Machine Learning
Probabilistic DFA Inference using Kullback-Leibler Divergence and Minimality
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Inducing Probabilistic Grammars by Bayesian Model Merging
ICGI '94 Proceedings of the Second International Colloquium on Grammatical Inference and Applications
Improving Probabilistic Grammatical Inference Core Algorithms with Post-processing Techniques
ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning
Contribution of Boosting in Wrapper Models
PKDD '99 Proceedings of the Third European Conference on Principles of Data Mining and Knowledge Discovery
Multi-site data collection for a spoken language corpus
HLT '91 Proceedings of the workshop on Speech and Natural Language
Text chunking by system combination
CoNLL '00 Proceedings of the 2nd workshop on Learning language in logic and the 4th conference on Computational natural language learning - Volume 7
An empirical evaluation of bagging and boosting
AAAI'97/IAAI'97 Proceedings of the fourteenth national conference on artificial intelligence and ninth conference on Innovative applications of artificial intelligence
An introduction to boosting and leveraging
Advanced lectures on machine learning
Study of ensemble strategies in discovering linear causal models
FSKD'05 Proceedings of the Second international conference on Fuzzy Systems and Knowledge Discovery - Volume Part II
In this paper, we focus on adapting boosting to density estimation, which is useful in a number of fields including Natural Language Processing and Computational Biology. Boosting has previously been used to optimize classification algorithms, improving generalization accuracy by combining many classifiers. The core of the boosting strategy, as embodied in the well-known ADABOOST algorithm [4], consists of updating the distribution over learning instances: examples misclassified by the current classifier have their weight increased, while correctly classified examples have it decreased. Except for [17, 18], few works have attempted to exploit the interesting theoretical properties of boosting (such as margin maximization) independently of a classification task. In this paper, we use not classification errors to optimize a classifier, but density estimation errors to optimize an estimator (here, a probabilistic automaton) of a given target density. Experimental results are presented that show the interest of our approach.
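The ADABOOST distribution update described above can be sketched as follows. This is a minimal illustration of the standard AdaBoost reweighting step for binary ±1 labels, not the density-estimation variant proposed in the paper; the function name and interface are hypothetical.

```python
import math

def adaboost_weight_update(weights, predictions, labels):
    """One round of the AdaBoost distribution update.

    weights: current distribution over examples (sums to 1)
    predictions: weak hypothesis outputs in {-1, +1}
    labels: true labels in {-1, +1}
    Assumes the weighted error is strictly between 0 and 1.
    Returns (new_weights, alpha).
    """
    # Weighted error of the current weak hypothesis
    eps = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    # Hypothesis weight: larger when the weighted error is smaller
    alpha = 0.5 * math.log((1 - eps) / eps)
    # Increase the weight of misclassified examples,
    # decrease the weight of correctly classified ones
    new_w = [w * math.exp(alpha if p != y else -alpha)
             for w, p, y in zip(weights, predictions, labels)]
    # Renormalize so the weights again form a distribution
    z = sum(new_w)
    return [w / z for w in new_w], alpha
```

For example, starting from a uniform distribution over four examples with one misclassification, the misclassified example's weight rises from 1/4 to 1/2 while each correct example's weight falls to 1/6.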