A Short Review of Statistical Learning Theory

  • Authors:
  • Massimiliano Pontil

  • Venue:
  • WIRN VIETRI 2002: Proceedings of the 13th Italian Workshop on Neural Nets, Revised Papers
  • Year:
  • 2002


Abstract

Statistical learning theory has emerged in the last few years as a solid and elegant framework for studying the problem of learning from examples. Unlike earlier "classical" learning techniques, this theory completely characterizes the necessary and sufficient conditions for a learning algorithm to be consistent. The key quantity is the capacity of the set of hypotheses employed by the learning algorithm, and the goal is to control this capacity depending on the given examples. Structural risk minimization (SRM) is the main theoretical algorithm implementing this idea. SRM is inspired by, and closely related to, regularization theory. For practical purposes, however, SRM is a very hard problem and is impossible to implement when dealing with a large number of examples. Techniques such as support vector machines and the older regularization networks are a viable way to implement the idea of capacity control. The paper also discusses how these techniques can be formulated as a variational problem in a Hilbert space and shows how SRM can be extended in order to implement both classical regularization networks and support vector machines.
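The variational problem mentioned in the abstract can be sketched in its standard Tikhonov form (the notation below is an assumption for illustration, not taken from the paper): given examples $(x_i, y_i)_{i=1}^{\ell}$ and a reproducing kernel Hilbert space $\mathcal{H}$, one solves

```latex
\min_{f \in \mathcal{H}} \; \frac{1}{\ell} \sum_{i=1}^{\ell} V\bigl(y_i, f(x_i)\bigr) \;+\; \lambda \, \|f\|_{\mathcal{H}}^{2}
```

where $V$ is a loss function and $\lambda > 0$ controls the capacity of the hypothesis space. Choosing $V$ as the square loss yields regularization networks, while the hinge loss $V(y, f(x)) = \max(0, 1 - y f(x))$ yields support vector machines.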
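As a minimal concrete illustration of the regularization-network case (not code from the paper; kernel width, regularization value, and function names are assumptions): with the square loss, the representer theorem reduces the variational problem to a linear system in the kernel coefficients.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=0.5):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_regularization_network(X, y, lam=0.01, sigma=0.5):
    # Square loss + RKHS norm penalty: solve (K + lam * l * I) c = y
    l = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * l * np.eye(l), y)

def predict(X_train, c, X_new, sigma=0.5):
    # f(x) = sum_i c_i k(x, x_i)
    return gaussian_kernel(X_new, X_train, sigma) @ c

# Toy data: a noisy sine curve
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

c = fit_regularization_network(X, y)
y_hat = predict(X, c, X)
```

Increasing `lam` shrinks the RKHS norm of the solution, which is exactly the capacity-control idea the abstract describes.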