Generalization and PAC learning: some new results for the class of generalized single-layer networks

  • Authors:
  • S. B. Holden; P. J. W. Rayner

  • Affiliations:
  • Dept. of Eng., Cambridge Univ.

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1995

Abstract

The ability of connectionist networks to generalize is often cited as one of their most important properties. We analyze the generalization ability of the class of generalized single-layer networks (GSLNs), which includes Volterra networks, radial basis function networks, regularization networks, and the modified Kanerva model, using techniques based on the theory of probably approximately correct (PAC) learning that have previously been used to analyze the generalization ability of feedforward networks of linear threshold elements (LTEs). An introduction to the relevant computational learning theory is included. We derive necessary and sufficient conditions on the number of training examples required by a GSLN to guarantee a particular generalization performance. We compare our results with those given previously for feedforward networks of LTEs and show that, on the basis of the currently available bounds, the number of training examples sufficient for GSLNs is typically considerably smaller than for feedforward networks of LTEs with the same number of weights. We show that the use of self-structuring techniques for GSLNs may reduce the number of training examples sufficient to guarantee good generalization performance, and we provide an explanation for the fact that GSLNs can require a relatively large number of weights.
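The sample-size conditions discussed in the abstract are of the standard PAC form. As an illustration only (not the paper's specific GSLN bounds), the sketch below evaluates the classical sufficient-sample-size bound of Blumer, Ehrenfeucht, Haussler, and Warmuth for a hypothesis class of given VC dimension; a GSLN thresholds a linear combination of fixed basis functions, so its VC dimension is at most the number of adjustable weights. The weight count `k` and the accuracy/confidence parameters are hypothetical values chosen for the example.

```python
import math

def behw_sufficient_sample_size(vc_dim: int, epsilon: float, delta: float) -> int:
    """Sufficient number of training examples for PAC learning a class of
    VC dimension `vc_dim` to error at most `epsilon` with probability at
    least 1 - `delta` (Blumer-Ehrenfeucht-Haussler-Warmuth bound)."""
    term1 = (4.0 / epsilon) * math.log2(2.0 / delta)
    term2 = (8.0 * vc_dim / epsilon) * math.log2(13.0 / epsilon)
    return math.ceil(max(term1, term2))

# Illustrative GSLN: k fixed basis functions, so VC dimension at most k.
k = 50  # hypothetical number of weights / basis functions
m = behw_sufficient_sample_size(vc_dim=k, epsilon=0.1, delta=0.05)
print(f"Sufficient training examples (illustrative bound): {m}")
```

Because the bound grows only linearly in the VC dimension, and the VC dimension of a GSLN is bounded by its weight count, this kind of estimate is typically smaller than the corresponding bounds for multilayer networks of LTEs with the same number of weights, which is the comparison the abstract describes.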