Can AdaBoost.M1 Learn Incrementally? A Comparison to Learn++ Under Different Combination Rules

  • Authors:
  • Hussein Syed Mohammed; James Leander; Matthew Marbach; Robi Polikar

  • Affiliations:
  • Electrical and Computer Engineering, Rowan University, Glassboro, NJ (all authors)

  • Venue:
  • ICANN'06: Proceedings of the 16th International Conference on Artificial Neural Networks, Part I
  • Year:
  • 2006


Abstract

We had previously introduced Learn++, inspired in part by the ensemble-based AdaBoost algorithm, for incrementally learning from new data, including new concept classes, without forgetting what had been previously learned. In this effort, we compare the incremental learning performance of Learn++ and AdaBoost under several combination schemes, including their native weighted majority voting. We show on several databases that changing AdaBoost's distribution update rule from a hypothesis-based update to an ensemble-based update enables significantly more efficient incremental learning, regardless of the rule used to combine the classifiers.
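The abstract's central distinction, between a hypothesis-based and an ensemble-based distribution update, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `D` is the weight distribution over training instances, `h_preds` holds the predictions of the newest hypothesis, `H_preds` holds the weighted-majority-vote predictions of the ensemble built so far, and both function names are hypothetical.

```python
import numpy as np

def hypothesis_based_update(D, h_preds, y):
    """AdaBoost.M1-style update: driven by the error of the single
    newest hypothesis h_t. Assumes 0 < error < 0.5."""
    eps = np.sum(D[h_preds != y])             # weighted error of h_t
    beta = eps / (1.0 - eps)
    D = np.where(h_preds == y, D * beta, D)   # shrink weights h_t got right
    return D / D.sum()                        # renormalize to a distribution

def ensemble_based_update(D, H_preds, y):
    """Learn++-style update: driven by the error of the composite
    hypothesis H_t, i.e. the weighted majority vote of all
    classifiers generated so far. Assumes 0 < error < 0.5."""
    E = np.sum(D[H_preds != y])               # weighted error of the ensemble
    B = E / (1.0 - E)
    D = np.where(H_preds == y, D * B, D)      # shrink weights the ensemble got right
    return D / D.sum()
```

Because the ensemble-based rule reduces the weight of instances the composite hypothesis already classifies correctly, subsequent classifiers are steered toward data the current ensemble still misses, which in an incremental setting includes instances of newly introduced classes. This is the mechanism behind the efficiency difference the abstract reports.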