Random classification noise defeats all convex potential boosters
Proceedings of the 25th International Conference on Machine Learning
Collective-agreement-based pruning of ensembles
Computational Statistics & Data Analysis
Boosting with pairwise constraints
Neurocomputing
Expert Systems with Applications: An International Journal
New specifics for a hierarchical estimator meta-algorithm
ICAISC'12: Proceedings of the 11th International Conference on Artificial Intelligence and Soft Computing, Part II
Boosting k-NN for Categorization of Natural Scenes
International Journal of Computer Vision
A theory of multiclass boosting
The Journal of Machine Learning Research
On the doubt about margin explanation of boosting
Artificial Intelligence
The rate of convergence of AdaBoost
The Journal of Machine Learning Research
The risk, or probability of error, of the classifier produced by the AdaBoost algorithm is investigated. In particular, we consider the stopping strategy to be used in AdaBoost to achieve universal consistency. We show that, provided AdaBoost is stopped after n^{1-ε} iterations for sample size n and ε ∈ (0,1), the sequence of risks of the classifiers it produces approaches the Bayes risk.
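The stopping rule in the abstract translates directly into practice: for a training set of size n, run at most n^{1-ε} boosting rounds. Below is a minimal sketch of that rule, assuming scikit-learn's AdaBoostClassifier as the booster and an illustrative synthetic dataset and choice of ε; the result itself is about the abstract algorithm, not this particular library.

```python
# Sketch: stop AdaBoost after n^(1-eps) rounds, per the consistency result.
# The dataset, eps value, and use of scikit-learn are illustrative assumptions.
import math

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=1000, random_state=0)
n = len(X)

eps = 0.5                                 # any eps in (0, 1) satisfies the hypothesis
T = max(1, math.floor(n ** (1 - eps)))    # number of boosting rounds: n^(1-eps)

clf = AdaBoostClassifier(n_estimators=T, random_state=0)
clf.fit(X, y)
print(f"n = {n}, stopped after T = {T} rounds, "
      f"training accuracy = {clf.score(X, y):.3f}")
```

The point of capping n_estimators this way is that the number of rounds grows with the sample size, but strictly slower than n, which is what the theorem requires for the risk to approach the Bayes risk.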