Different Paradigms for Choosing Sequential Reweighting Algorithms

  • Authors:
  • Gilles Blanchard

  • Affiliations:
  • Département de Mathématiques, Université Paris-Sud, 91405 Orsay Cedex, France, and Fraunhofer FIRST, Kekuléstr. 7, 12489 Berlin, Germany

  • Venue:
  • Neural Computation
  • Year:
  • 2004

Abstract

Analyses of the success of ensemble methods in classification have pointed out the important role played by the margin distribution function on the training and test sets. While it is acknowledged that one should generally try to achieve high margins on the training set, the more precise shape of the empirical margin distribution function one should favor in practice is subject to different approaches. We first present two competing philosophies for choosing the empirical margin profile: the minimax margin paradigm and the mean-and-variance paradigm. The best-known representative of the first paradigm is the AdaBoost algorithm, and several authors have shown this philosophy to be closely related to the principle of the support vector machine. We show that the second paradigm is very close in spirit to Fisher's linear discriminant (in a feature space). We construct two boosting-type algorithms, very similar in form, each dedicated to one of the two philosophies. By interpolation, we then derive a very simple family of iterative reweighting algorithms that can be understood as different trade-offs between the two paradigms, and we argue from experiments that this allows for suitable adaptivity to different classification problems, particularly in the presence of noise or excessive complexity of the base classifiers.
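
For readers unfamiliar with the terminology, the following standard definitions (assumed notation, not quoted from the paper) pin down the objects the abstract refers to: for a convex combination of base classifiers, the margin of a training example and the empirical margin distribution function are

```latex
% Standard definitions (assumed notation, not quoted from the paper).
% Base classifiers h_t : X -> {-1,+1}; nonnegative weights alpha_t summing to 1.
\[
  f(x) = \sum_{t} \alpha_t h_t(x), \qquad
  m_i = y_i f(x_i) \in [-1, 1],
\]
\[
  \widehat{F}(\theta) = \frac{1}{n}\,\bigl|\{\, i : m_i \le \theta \,\}\bigr| .
\]
```

Under these definitions, the minimax margin paradigm pushes up the smallest margin min_i m_i, while the mean-and-variance paradigm trades off the mean of the m_i against their variance.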
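
To make the interpolation idea concrete, here is a minimal, hypothetical sketch of one reweighting step controlled by a trade-off parameter lam. This is not the paper's algorithm: the exponential and linear weight profiles and the convex blend between them are assumptions chosen only for illustration.

```python
# Hypothetical sketch, not Blanchard's algorithm: one reweighting step
# over the current margins m_i = y_i f(x_i), with a parameter `lam`
# blending two weight profiles.  The profiles and the convex blend are
# illustrative assumptions.
import numpy as np

def reweight(m, lam):
    """Return a weight distribution over training points.

    lam = 0 -> exponential weights exp(-m_i): the smallest margins
               dominate, in the spirit of the minimax-margin paradigm
               (AdaBoost-style reweighting).
    lam = 1 -> weights linear in the margin, max(0, 1 - m_i): the
               emphasis decays gently, which corresponds to a quadratic
               loss on margins and is closer in spirit to a
               mean-and-variance criterion.
    """
    w_exp = np.exp(-m)                 # minimax-margin profile
    w_lin = np.maximum(0.0, 1.0 - m)   # mean/variance-style profile (assumed form)
    w = (1.0 - lam) * w_exp + lam * w_lin
    return w / w.sum()                 # normalize to a probability distribution

# Toy usage: margins of five training points under some current ensemble.
m = np.array([-0.5, 0.1, 0.3, 0.8, 1.2])
for lam in (0.0, 0.5, 1.0):
    print(f"lam={lam}:", np.round(reweight(m, lam), 3))
```

At lam = 0 the weights concentrate exponentially on the worst-classified points, as in AdaBoost; at lam = 1, well-classified points (margin at least 1) receive zero weight and the emphasis spreads more evenly across the sample. Intermediate values of lam realize intermediate trade-offs, which is the spirit of the interpolated family the abstract describes.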