Leave One Out Error, Stability, and Generalization of Voting Combinations of Classifiers

  • Authors:
  • Theodoros Evgeniou; Massimiliano Pontil; André Elisseeff

  • Affiliations:
  • Theodoros Evgeniou: Technology Management, INSEAD, Boulevard de Constance, 77305 Fontainebleau, France. theodoros.evgeniou@insead.edu
  • Massimiliano Pontil: DII, University of Siena, Via Roma 56, 53100 Siena, Italy. pontil@dii.unisi.it
  • André Elisseeff: Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany. andre.elisseeff@tuebingen.mpg.de

  • Venue:
  • Machine Learning
  • Year:
  • 2004

Abstract

We study the leave-one-out and generalization errors of voting combinations of learning machines. A special case considered is a variant of bagging. We analyze in detail combinations of kernel machines, such as support vector machines, and present theoretical estimates of their leave-one-out error. We also derive novel bounds on the stability of combinations of any classifiers. These bounds can be used to formally show that, for example, bagging increases the stability of unstable learning machines. We report experiments supporting the theoretical findings.
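To make the objects in the abstract concrete, the following is a minimal illustrative sketch, not the paper's theoretical estimator or stability bounds: it computes the leave-one-out error of a majority-vote combination of support vector machines, each trained on a bootstrap subsample (a bagging-style combination), by explicit retraining. The scikit-learn classifier, RBF kernel, ensemble size, and synthetic dataset are assumptions chosen for illustration only.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def vote_predict(machines, x):
    # Majority vote of the individual kernel machines on a single point x.
    votes = np.array([m.predict(x.reshape(1, -1))[0] for m in machines])
    return 1 if votes.sum() >= 0 else -1

def train_voting_svms(X, y, n_machines=30, rng=None):
    # Train n_machines SVMs, each on a bootstrap subsample of the data.
    rng = np.random.default_rng() if rng is None else rng
    machines = []
    for _ in range(n_machines):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap sample
        machines.append(SVC(kernel="rbf", C=1.0).fit(X[idx], y[idx]))
    return machines

def leave_one_out_error(X, y, n_machines=30, seed=0):
    # Leave-one-out error of the voting combination, by retraining for each
    # held-out point; the same random draws are reused in every fold.
    errors = 0
    for i in range(len(X)):
        rng = np.random.default_rng(seed)
        mask = np.arange(len(X)) != i
        machines = train_voting_svms(X[mask], y[mask], n_machines, rng)
        errors += vote_predict(machines, X[i]) != y[i]
    return errors / len(X)

if __name__ == "__main__":
    X, y = make_classification(n_samples=80, n_features=10, random_state=0)
    y = 2 * y - 1                                    # labels in {-1, +1}
    print("LOO error of the voting combination:", leave_one_out_error(X, y))

This brute-force computation retrains the whole combination once per training point; the paper, by contrast, presents theoretical estimates of the leave-one-out error of such kernel-machine combinations and stability bounds that avoid this retraining loop.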