Learning and generalization in radial basis function networks

  • Authors:
  • J. A. S. Freeman; D. Saad

  • Venue:
  • Neural Computation
  • Year:
  • 1995

Abstract

The two-layer radial basis function network, with fixed centers of the basis functions, is analyzed within a stochastic training paradigm. Various definitions of generalization error are considered, and two such definitions are employed in deriving generic learning curves and generalization properties, both with and without a weight decay term. The generalization error is shown analytically to be related to the evidence and, via the evidence, to the prediction error and free energy. The generalization behavior is explored; the generic learning curve is found to be inversely proportional to the number of training pairs presented. Optimization of training is considered by minimizing the generalization error with respect to the free parameters of the training algorithms. Finally, the effect of the joint activations between hidden-layer units is examined and shown to speed training.
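To make the architecture concrete, the following is a minimal sketch (not the authors' code) of a two-layer RBF network with fixed Gaussian centers, where only the output weights are trained; the weight decay term appears as a ridge penalty in the closed-form least-squares solution. The data, center placement, basis width, and decay strength are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (assumed for illustration)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(50)

centers = np.linspace(-1, 1, 10).reshape(-1, 1)  # fixed basis-function centers
width = 0.3                                      # assumed common basis width
lam = 1e-3                                       # weight-decay (ridge) strength

def design_matrix(X, centers, width):
    """Gaussian activation of each fixed hidden unit for each input."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = design_matrix(X, centers, width)

# With the centers fixed, training reduces to linear regression on the
# hidden activations; weight decay gives the regularized normal equations.
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y)

y_hat = Phi @ w
mse = float(np.mean((y_hat - y) ** 2))
```

Because the hidden layer is fixed, the output weights have a closed-form solution; the stochastic training paradigm analyzed in the paper would instead sample such weights from a posterior shaped by the same squared-error-plus-decay cost.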