Stability and generalization

  • Authors:
  • Olivier Bousquet; André Elisseeff

  • Affiliations:
  • CMAP, Ecole Polytechnique, F-91128 Palaiseau, France; BIOwulf Technologies, 305 Broadway, New York, NY

  • Venue:
  • The Journal of Machine Learning Research
  • Year:
  • 2002

Abstract

We define notions of stability for learning algorithms and show how to use these notions to derive generalization error bounds based on the empirical error and the leave-one-out error. The methods we use can be applied in the regression framework as well as in the classification one when the classifier is obtained by thresholding a real-valued function. We study the stability properties of large classes of learning algorithms, such as regularization-based algorithms. In particular, we focus on Hilbert space regularization and Kullback-Leibler regularization. We demonstrate how to apply the results to SVMs for regression and classification.
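The abstract's central idea can be illustrated numerically. The sketch below (not from the paper; all names and the choice of ridge regression are illustrative assumptions) probes the stability of a Hilbert-space-regularized learner by leaving each training point out in turn, measuring both the leave-one-out error and how much the learned predictor changes — a crude empirical proxy for the uniform-stability constant that drives the paper's generalization bounds:

```python
# Hedged sketch: empirically probing the stability of ridge regression,
# an instance of Hilbert-space regularization. Uniform stability asks how
# much the learned function can change when one training point is removed.
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam):
    # Closed-form minimizer of (1/n)||Xw - y||^2 + lam ||w||^2:
    # w = (X^T X + lam * n * I)^{-1} X^T y
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

# Synthetic regression data (illustrative only).
n, d, lam = 50, 5, 1.0
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
x_test = rng.normal(size=d)

w_full = fit_ridge(X, y, lam)

max_change = 0.0   # largest prediction change after deleting one point
loo_errors = []    # leave-one-out squared errors
for i in range(n):
    mask = np.arange(n) != i
    w_i = fit_ridge(X[mask], y[mask], lam)
    max_change = max(max_change, abs((w_full - w_i) @ x_test))
    loo_errors.append((X[i] @ w_i - y[i]) ** 2)

print(f"max prediction change after removing one point: {max_change:.4f}")
print(f"leave-one-out error estimate: {np.mean(loo_errors):.4f}")
```

For regularized learners of this kind, the stability constant scales like O(1/(lam * n)), so increasing the regularization strength or the sample size shrinks `max_change` and tightens the resulting generalization bound.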