Boosting in the limit: maximizing the margin of learned ensembles

  • Authors:
  • Adam J. Grove; Dale Schuurmans


  • Venue:
  • AAAI '98/IAAI '98: Proceedings of the Fifteenth National Conference on Artificial Intelligence and Tenth Conference on Innovative Applications of Artificial Intelligence
  • Year:
  • 1998

Abstract

The "minimum margin" of an ensemble classifier on a given training set is, roughly speaking, the smallest vote it gives to any correct training label. Recent work has shown that the Adaboost algorithm is particularly effective at producing ensembles with large minimum margins, and theory suggests that this may account for its success at reducing generalization error. We note, however, that the problem of finding good margins is closely related to linear programming, and we use this connection to derive and test new "LPboosting" algorithms that achieve better minimum margins than Adaboost.However, these algorithms do not always yield better generalization performance. In fact, more often the opposite is true. We report on a series of controlled experiments which show that no simple version of the minimum-margin story can be complete. We conclude that the crucial question as to why boosting works so well in practice, and how to further improve upon it, remains mostly open.Some of our experiments are interesting for another reason: we show that Adaboost sometimes does overfit--eventually. This may take a very long time to occur, however, which is perhaps why this phenomenon has gone largely unnoticed.