Fat-shattering and the learnability of real-valued functions

  • Authors:
  • Peter L. Bartlett; Philip M. Long; Robert C. Williamson

  • Affiliations:
  • Department of Systems Engineering, RSISE, Australian National University, Canberra 0200, Australia; Computer Science Department, Duke University, P.O. Box 90129, Durham, NC; Department of Engineering, Australian National University, Canberra 0200, Australia

  • Venue:
  • COLT '94: Proceedings of the Seventh Annual Conference on Computational Learning Theory
  • Year:
  • 1994

Abstract

We consider the problem of learning real-valued functions from random examples when the function values are corrupted with noise. Under mild conditions on independent observation noise, we characterize the learnability of a real-valued function class in terms of a generalization of the Vapnik-Chervonenkis dimension: the fat-shattering function introduced by Kearns and Schapire. We show that, given some restrictions on the noise, a function class is learnable in our model if and only if its fat-shattering function is finite. Under different (also quite mild) restrictions, satisfied for example by Gaussian noise, we show that a function class is learnable from polynomially many examples if and only if its fat-shattering function grows polynomially. We prove analogous results in an agnostic setting, where there is no assumption of an underlying function class.
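
For reference, the central quantity can be stated as follows; this is the standard definition due to Kearns and Schapire, reproduced here as a sketch since the abstract itself does not spell it out, with conventional notation. A set \{x_1, \ldots, x_d\} is \gamma-shattered by a class F of real-valued functions if there exist witnesses r_1, \ldots, r_d \in \mathbb{R} such that for every b \in \{0,1\}^d some f_b \in F satisfies

    f_b(x_i) \ge r_i + \gamma \quad \text{if } b_i = 1, \qquad f_b(x_i) \le r_i - \gamma \quad \text{if } b_i = 0.

The fat-shattering function \mathrm{fat}_F(\gamma) is the cardinality of the largest \gamma-shattered set, or infinite if arbitrarily large sets are \gamma-shattered. The results above then characterize learnability by finiteness of \mathrm{fat}_F(\gamma) at each scale \gamma > 0, and sample-efficient learnability by its polynomial growth in 1/\gamma.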