Scaling discrete AdaBoost to handle real-valued weak hypotheses has often been done under the auspices of convex optimization, but little is generally known from the standpoint of the original boosting model. We introduce a novel generalization of discrete AdaBoost that departs from this mainstream of algorithms. From the theoretical standpoint, it formally displays the original boosting property; furthermore, it brings interesting computational and numerical improvements that make it significantly easier to handle “as is”. Conceptually, it provides a new and appealing extension to the reals of some well-known facts about discrete (Ada)boosting. Perhaps the most popular of these is the iterative weight modification mechanism, under which examples have their weights decreased iff the current discrete weak hypothesis gives them the right class. In our real-valued generalization, weight decreases instead affect only the examples on which the hypothesis' margin exceeds its average margin. While the two properties coincide in the discrete case, with real-valued weak hypotheses examples that receive the right class can still be reweighted higher. From the experimental standpoint, our generalization produces low-error formulas with particular cumulative margin distributions, and it handles gracefully the noisy domains that are the Achilles' heel of common adaptive boosting algorithms.
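To make the reweighting mechanism concrete, below is a minimal, hypothetical Python sketch of a boosting loop whose weight update is centred on the weak hypothesis' average margin, as described above. The abstract does not give the paper's exact formulas, so the weak-learner interface, the confidence coefficient `alpha`, and the exponential form of the update are illustrative assumptions; the sketch only demonstrates the stated property, namely that weights shrink exactly on the examples whose margin exceeds the hypothesis' weighted-average margin.

```python
import numpy as np

def real_boost_sketch(X, y, weak_learner, T=50):
    """Illustrative sketch (NOT the paper's algorithm).

    Assumptions: y[i] in {-1, +1}; weak_learner(X, y, w) returns a
    callable h with h(X)[i] in [-1, 1]. The update below is chosen so
    that an example's weight decreases iff its margin y[i] * h(X)[i]
    exceeds the weighted-average margin of h.
    """
    n = X.shape[0]
    w = np.full(n, 1.0 / n)          # uniform initial distribution
    ensemble = []                    # list of (alpha, h) pairs

    for _ in range(T):
        h = weak_learner(X, y, w)
        margins = y * h(X)                      # per-example margins in [-1, 1]
        mu = float(np.dot(w, margins))          # weighted-average margin
        if mu <= 0.0:                           # no weak advantage: stop
            break
        mu = min(mu, 1.0 - 1e-12)               # guard against division by zero
        alpha = 0.5 * np.log((1 + mu) / (1 - mu))  # confidence (illustrative choice)
        # Margin-centred update: since alpha > 0, w[i] shrinks exactly when
        # margins[i] > mu, and can grow even for a correctly classified
        # example whose margin falls below the average.
        w *= np.exp(-alpha * (margins - mu))
        w /= w.sum()                            # renormalise to a distribution
        ensemble.append((alpha, h))

    return ensemble
```

Note that in the discrete case, where h(X)[i] is restricted to {-1, +1}, the condition margins[i] > mu reduces to correct classification (mu lies strictly between -1 and 1), so the sketch recovers the classical discrete behaviour mentioned above.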