Temporal evolution of generalization during learning in linear networks

  • Authors:
  • Pierre Baldi; Yves Chauvin

  • Affiliations:
  • Jet Propulsion Laboratory and Division of Biology, California Institute of Technology, Pasadena, CA 91125 USA; Department of Psychology, Stanford University, Stanford, CA 94305 USA and NET-ID, Inc., Menlo Park, CA 94025 USA

  • Venue:
  • Neural Computation
  • Year:
  • 1991

Abstract

We study generalization in a simple framework of feedforward linear networks with n inputs and n outputs, trained from examples by gradient descent on the usual quadratic error function. We derive analytical results on the behavior of the validation function, defined as the LMS error function calculated on a set of validation patterns. We show that the behavior of the validation function depends critically on the initial conditions and on the characteristics of the noise. Under certain simple assumptions, if the initial weights are sufficiently small, the validation function has a unique minimum corresponding to an optimal stopping time for training, for which simple bounds can be calculated. There also exist situations where the validation function can exhibit more complicated and somewhat unexpected behavior, such as multiple local minima (at most n) of variable depth and long but finite plateau effects. Additional results and possible extensions are briefly discussed.
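The setting described in the abstract can be illustrated numerically. Below is a minimal NumPy sketch, not the paper's analytical derivation: it trains an n-by-n linear map by gradient descent on the quadratic training error while tracking the LMS error on a held-out validation set, then reports the empirically best stopping step. The teacher matrix, noise level, learning rate, number of patterns, and network size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10                      # number of inputs and outputs (hypothetical size)
n_train, n_val = 50, 50     # hypothetical numbers of training/validation patterns

# Hypothetical noisy linear teacher, standing in for the training and
# validation patterns discussed in the abstract.
W_true = rng.normal(size=(n, n))
X_train = rng.normal(size=(n_train, n))
Y_train = X_train @ W_true.T + 0.5 * rng.normal(size=(n_train, n))
X_val = rng.normal(size=(n_val, n))
Y_val = X_val @ W_true.T + 0.5 * rng.normal(size=(n_val, n))

W = 0.01 * rng.normal(size=(n, n))   # small initial weights, as in the abstract's assumption
lr, n_steps = 1e-3, 2000             # illustrative learning rate and horizon

val_curve = []
for step in range(n_steps):
    # Gradient descent on the quadratic (LMS) training error.
    err_train = X_train @ W.T - Y_train
    grad = err_train.T @ X_train / n_train
    W -= lr * grad

    # Validation function: LMS error computed on the validation patterns.
    err_val = X_val @ W.T - Y_val
    val_curve.append(np.mean(err_val ** 2))

best_step = int(np.argmin(val_curve))
print(f"empirical optimal stopping time: step {best_step}, "
      f"validation error {val_curve[best_step]:.4f}")
```

With small initial weights this curve typically shows the single minimum the abstract describes; changing the initialization scale or the noise in the sketch is one way to probe the more complicated behaviors (multiple minima, plateaus) mentioned above.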