Using Diversity with Three Variants of Boosting: Aggressive, Conservative, and Inverse

  • Authors:
  • Ludmila Kuncheva; Christopher J. Whitaker

  • Venue:
  • MCS '02 Proceedings of the Third International Workshop on Multiple Classifier Systems
  • Year:
  • 2002

Abstract

We look at three variants of the boosting algorithm, called here Aggressive Boosting, Conservative Boosting, and Inverse Boosting. We associate the diversity measure Q with the accuracy during the progressive development of the ensembles, in the hope of being able to detect the point of "paralysis" of the training, if any. Three data sets are used: the artificial Cone-Torus data, the UCI Pima Indian Diabetes data, and the Phoneme data. We run each of the three boosting variants with two base classifier models: the quadratic classifier and a multi-layer perceptron (MLP) neural network. The three variants show different behavior, in most cases favoring Conservative Boosting.
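The diversity measure Q mentioned in the abstract is the pairwise Q statistic (Yule's Q), computed from the joint correct/incorrect decisions of two classifiers on the same samples. As a rough illustration (a minimal sketch, not code from the paper), it can be computed as follows; the function name and boolean-list inputs are assumptions for the example:

```python
def q_statistic(correct_a, correct_b):
    """Pairwise Q statistic (Yule's Q) for two classifiers.

    correct_a, correct_b: sequences of booleans, True where the
    corresponding classifier labelled the i-th sample correctly.
    Q is near 0 for independent classifiers, positive when they tend
    to err on the same samples, negative when they err on different ones.
    """
    # Counts of the four joint outcomes over the sample set.
    n11 = n00 = n10 = n01 = 0
    for a, b in zip(correct_a, correct_b):
        if a and b:
            n11 += 1          # both correct
        elif not a and not b:
            n00 += 1          # both wrong
        elif a:
            n10 += 1          # only A correct
        else:
            n01 += 1          # only B correct
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)
```

An ensemble-level diversity value is then typically obtained by averaging Q over all pairs of base classifiers in the ensemble.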