Error bounds for aggressive and conservative AdaBoost

  • Authors:
  • Ludmila I. Kuncheva

  • Affiliations:
  • School of Informatics, University of Wales, Bangor, Bangor, Gwynedd, UK

  • Venue:
  • MCS'03 Proceedings of the 4th international conference on Multiple classifier systems
  • Year:
  • 2003


Abstract

Three AdaBoost variants are distinguished based on the strategies applied to update the weights for each new ensemble member. The classic AdaBoost due to Freund and Schapire only decreases the weights of the correctly classified objects and is conservative in this sense. All the weights are then updated through a normalization step. Other AdaBoost variants in the literature update all the weights before renormalizing (the aggressive variant). Alternatively, we may increase only the weights of the misclassified objects and then renormalize (the second conservative variant). The three variants have different bounds on their training errors, which could indicate different generalization performance. The bounds are derived here following the proof by Freund and Schapire for the classical multi-class AdaBoost (AdaBoost.M1), and are compared against each other. The aggressive variant and the less popular of the two conservative variants have lower error bounds than the classical AdaBoost. Also, whereas the coefficients βi in the classical AdaBoost are found as the unique solution of a minimization problem on the bound, the aggressive and the second conservative variants have monotone increasing functions of βi (0 ≤ βi ≤ 1) as their bounds, giving infinitely many choices of βi.
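
To illustrate the distinction between the three update strategies described in the abstract, the sketch below shows one possible reweighting step per variant. It assumes, as in AdaBoost.M1, that correctly classified objects are down-weighted by a factor βi and misclassified objects are up-weighted by 1/βi; the function name, the exact multiplicative factors, and the variant labels are illustrative assumptions, not code or formulas taken from the paper.

```python
import numpy as np

def update_weights(w, correct, beta, variant="conservative1"):
    """One AdaBoost reweighting step (illustrative sketch).

    w        : current object weights (1-D array summing to 1)
    correct  : boolean mask, True where the new classifier is correct
    beta     : update coefficient, assumed 0 < beta < 1
    variant  : which of the three strategies from the abstract to apply
    """
    w = w.astype(float).copy()
    if variant == "conservative1":      # classical AdaBoost: shrink correct only
        w[correct] *= beta
    elif variant == "conservative2":    # grow misclassified only
        w[~correct] /= beta
    elif variant == "aggressive":       # update all weights
        w[correct] *= beta
        w[~correct] /= beta
    else:
        raise ValueError(f"unknown variant: {variant}")
    return w / w.sum()                  # renormalize to a distribution
```

In all three cases the relative emphasis shifts toward the misclassified objects after renormalization; the variants differ in how the pre-normalization update is split between the correctly and incorrectly classified objects, which is what yields the different error bounds discussed in the paper.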