Almost-everywhere algorithmic stability and generalization error

  • Authors:
  • Samuel Kutin; Partha Niyogi

  • Affiliations:
  • Department of Computer Science, University of Chicago, Chicago, IL (both authors)

  • Venue:
  • UAI '02: Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 2002

Abstract

We explore in some detail the notion of algorithmic stability as a viable framework for analyzing the generalization error of learning algorithms. We introduce the new notion of training stability of a learning algorithm and show that, in a general setting, it is sufficient for good bounds on generalization error. In the PAC setting, training stability is both necessary and sufficient for learnability. The approach based on training stability makes no reference to VC dimension or VC entropy; there is no need to prove uniform convergence, and generalization error is bounded directly via an extended McDiarmid inequality. As a result, the approach potentially applies to a broader class of learning algorithms than Empirical Risk Minimization. We also explore the relationships among VC dimension, generalization error, and various notions of stability. Several examples of learning algorithms are considered.
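
For reference, the classical McDiarmid (bounded-differences) inequality underlying this line of argument is sketched below; the paper's extended, "almost-everywhere" version (which, per the title, presumably allows the bounded-difference condition to fail on a small-probability set) is not reproduced here.

% Standard McDiarmid inequality, stated for reference only;
% not the paper's extended version.
\[
\Pr\Bigl[\, f(X_1,\dots,X_n) - \mathbb{E}\bigl[f(X_1,\dots,X_n)\bigr] \ge \varepsilon \,\Bigr]
\;\le\; \exp\!\left( \frac{-2\varepsilon^2}{\sum_{i=1}^n c_i^2} \right),
\]
% valid whenever X_1, ..., X_n are independent and f satisfies the
% bounded-differences condition below.
\[
\sup_{x_1,\dots,x_n,\;x_i'}
\bigl| f(x_1,\dots,x_i,\dots,x_n) - f(x_1,\dots,x_i',\dots,x_n) \bigr|
\;\le\; c_i \quad \text{for each } i .
\]

Roughly speaking, in stability-based analyses f is taken to be a quantity such as the gap between the empirical and true error of the learned hypothesis, so that a per-example stability condition controls the constants c_i; the exact choice of f and the relaxed condition used in the paper are not detailed in this abstract.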