Sample complexity for learning recurrent perceptron mappings

  • Authors:
  • B. DasGupta; E. D. Sontag

  • Affiliations:
  • Dept. of Comput. Sci., Waterloo Univ., Ont.; -

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 1996

Abstract

Recurrent perceptron classifiers generalize the usual perceptron model. They correspond to linear transformations of input vectors obtained by means of “autoregressive moving-average schemes”, or infinite impulse response filters, and take into account those correlations and dependences among input coordinates which arise from linear digital filtering. This paper provides tight bounds on the sample complexity associated to the fitting of such models to experimental data. The results are expressed in the context of the theory of probably approximately correct (PAC) learning