Improvements to AdaBoost Dynamic

  • Authors:
  • Erico N. de Souza; Stan Matwin

  • Affiliations:
  • School of Information Technology and Engineering, University of Ottawa, Ottawa, ON, Canada (both authors)

  • Venue:
  • Canadian AI'12: Proceedings of the 25th Canadian Conference on Advances in Artificial Intelligence
  • Year:
  • 2012

Abstract

This paper presents recent results on extending boosting, the well-known Machine Learning ensemble method. The main idea is to vary the "weak" base classifier at each step of the method, using the classifier that performs best on the weighted data presented in that iteration. We show that the solution is sensitive to the loss function used, and that the exponential loss function is detrimental to the performance of this kind of boosting. An approach that uses a logistic loss function performs better, but tends to overfit as the number of iterations grows. We show that this drawback can be overcome with a resampling technique taken from research on learning from imbalanced data.
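
The per-iteration selection of the base classifier can be illustrated with a short sketch. This is a minimal illustration, not the authors' AdaBoost Dynamic itself: the candidate pool, the data interface, and the plain AdaBoost.M1 exponential-loss weight update are assumptions made here for clarity, and the abstract notes that the exponential loss is precisely what the paper finds detrimental, with a logistic loss plus resampling performing better.

```python
# Minimal sketch of boosting with a dynamically chosen weak learner:
# at every round, each candidate is trained on the current weighted
# sample and the one with the lowest weighted error is kept.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

def boost_dynamic(X, y, candidates, n_rounds=50):
    """Labels y must be in {-1, +1}. Returns (models, alphas)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # uniform initial weights
    models, alphas = [], []
    for _ in range(n_rounds):
        best_err, best_clf = np.inf, None
        for proto in candidates:               # pick the weak learner that
            clf = clone(proto)                 # does best on this round's
            clf.fit(X, y, sample_weight=w)     # weighted data
            err = np.sum(w * (clf.predict(X) != y))
            if err < best_err:
                best_err, best_clf = err, clf
        best_err = np.clip(best_err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - best_err) / best_err)
        pred = best_clf.predict(X)
        w *= np.exp(-alpha * y * pred)         # exponential-loss update
        w /= w.sum()                           # (AdaBoost.M1; illustrative)
        models.append(best_clf)
        alphas.append(alpha)
    return models, alphas

def predict(models, alphas, X):
    """Weighted-vote prediction of the boosted ensemble."""
    score = sum(a * m.predict(X) for m, a in zip(models, alphas))
    return np.sign(score)

# Illustrative heterogeneous candidate pool (an assumption, not the
# pool used in the paper):
candidates = [DecisionTreeClassifier(max_depth=1), GaussianNB(),
              LogisticRegression(max_iter=200)]
```

Replacing the exponential factor in the weight update with a logistic-loss-based one, as the paper advocates, changes only the two update lines above; the per-round selection loop stays the same.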