Extending AdaBoost to iteratively vary its base classifiers
Canadian AI'11: Proceedings of the 24th Canadian Conference on Advances in Artificial Intelligence
This paper presents recent results on extending the well-known machine learning ensemble method, boosting. The main idea is to vary the "weak" base classifier at each step of the method, using the classifier that performs best on the data presented in that iteration. We show that the solution is sensitive to the loss function used, and that the exponential loss function is detrimental to the performance of this kind of boosting. An approach that uses a logistic loss function performs better, but tends to overfit as the number of iterations grows. We show that this drawback can be overcome with a resampling technique taken from research on learning from imbalanced data.
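To make the scheme concrete, the following is a minimal sketch, not the authors' implementation, of boosting that picks a different base classifier each round, weights examples with a logistic rather than exponential loss, and draws a resampled training set instead of reweighting. The candidate pool, the scikit-learn learners, and the function names (`boost_varying_base`, `predict_ensemble`) are illustrative assumptions.

```python
# Sketch: boosting with a per-round choice of base classifier,
# logistic-loss weights, and weight-driven resampling.
import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def boost_varying_base(X, y, n_rounds=20, seed=0):
    """Boost over a pool of candidate base learners; y must be -1/+1."""
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(seed)
    n = len(y)
    F = np.zeros(n)  # cumulative ensemble score on the training set
    candidates = [DecisionTreeClassifier(max_depth=1),
                  DecisionTreeClassifier(max_depth=3),
                  GaussianNB()]  # hypothetical candidate pool
    models, alphas = [], []
    for _ in range(n_rounds):
        # Logistic-loss weights: w_i = 1 / (1 + exp(y_i * F(x_i))).
        # Hard examples get more weight, but the weight saturates
        # instead of growing exponentially as under exponential loss.
        w = 1.0 / (1.0 + np.exp(y * F))
        w /= w.sum()
        # Resampling: draw a bootstrap sample according to the weights
        # rather than passing sample_weight to the learner.
        idx = rng.choice(n, size=n, replace=True, p=w)
        # Keep whichever candidate performs best this round, measured
        # by weighted error on the full training set.
        best, best_err = None, np.inf
        for cand in candidates:
            model = clone(cand).fit(X[idx], y[idx])
            err = float(np.sum(w * (model.predict(X) != y)))
            if err < best_err:
                best, best_err = model, err
        # Clip the error to keep alpha finite and positive.
        best_err = min(max(best_err, 1e-10), 0.5 - 1e-10)
        alpha = 0.5 * np.log((1.0 - best_err) / best_err)
        F += alpha * best.predict(X)
        models.append(best)
        alphas.append(alpha)
    return models, alphas

def predict_ensemble(models, alphas, X):
    """Weighted-vote prediction in {-1, +1}."""
    score = sum(a * m.predict(X) for a, m in zip(alphas, models))
    return np.sign(score)
```

One reason resampling fits this setting: sampling the training set according to the weights works even with base learners that accept no example weights, and, as the abstract argues, it mitigates the overfitting that the logistic-loss variant otherwise shows as the number of iterations grows.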