Model complexity control and statistical learning theory

  • Authors:
  • Vladimir Cherkassky

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA (E-mail: cherkass@ece.umn.edu)

  • Venue:
  • Natural Computing: an international journal
  • Year:
  • 2002

Abstract

We discuss the problem of model complexity control, also known as model selection. This problem frequently arises in the context of predictive learning and adaptive estimation of dependencies from finite data. First we review the problem of predictive learning as it relates to model complexity control. Then we discuss several issues important for practical implementation of complexity control, using the framework provided by Statistical Learning Theory (or Vapnik-Chervonenkis theory). Finally, we show practical applications of Vapnik-Chervonenkis (VC) generalization bounds for model complexity control. Empirical comparisons of different methods for complexity control suggest practical advantages of using VC-based model selection in settings where VC generalization bounds can be rigorously applied. We also argue that VC theory provides a methodological framework for complexity control even when its technical results cannot be directly applied.
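
To illustrate the idea of VC-based model selection mentioned in the abstract, the following is a minimal sketch, not the paper's exact procedure. It assumes a penalization form of the VC bound for regression (empirical risk divided by a factor depending on p = h/n, where h is the VC dimension and n the sample size) and, as a further assumption, takes the VC dimension of a polynomial model to be its number of free parameters. The function name `vc_penalized_risk` and the constants in the penalty are illustrative choices, not definitions from the paper.

```python
import numpy as np

def vc_penalized_risk(emp_risk, h, n):
    """Estimate prediction risk from empirical risk using a VC-style
    penalization factor (assumed form with p = h/n; see the paper for
    the exact bound used in the empirical comparisons)."""
    p = h / n
    penalty = 1.0 - np.sqrt(p - p * np.log(p) + np.log(n) / (2 * n))
    if penalty <= 0:          # bound is vacuous for overly complex models
        return np.inf
    return emp_risk / penalty

# Example: choose polynomial degree for noisy 1-D regression data.
rng = np.random.default_rng(0)
n = 30
x = np.sort(rng.uniform(-1, 1, n))
y = np.sin(np.pi * x) + 0.2 * rng.standard_normal(n)

best_degree, best_score = None, np.inf
for degree in range(1, 11):
    coeffs = np.polyfit(x, y, degree)
    emp_risk = np.mean((y - np.polyval(coeffs, x)) ** 2)
    # Assumption: VC dimension of a linear-in-parameters model taken
    # as its number of free parameters (degree + 1).
    score = vc_penalized_risk(emp_risk, h=degree + 1, n=n)
    if score < best_score:
        best_degree, best_score = degree, score

print(f"selected degree: {best_degree} (penalized risk {best_score:.4f})")
```

The design point is the one the abstract makes: rather than estimating prediction risk by resampling, the penalized empirical risk is minimized directly over candidate model complexities, which is where the practical advantages of VC-based selection are claimed to appear.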