Fast learning in networks of locally-tuned processing units

  • Authors: John Moody; Christian J. Darken
  • Affiliation (both authors): Yale Computer Science, P.O. Box 2158, New Haven, CT 06520, USA
  • Venue: Neural Computation
  • Year: 1989

Abstract

We propose a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken 1988). We consider training such networks in a completely supervised manner, but abandon this approach in favor of a more computationally efficient hybrid learning method which combines self-organized and supervised learning. Our networks learn faster than backpropagation for two reasons: the local representations ensure that only a few units respond to any given input, thus reducing computational overhead, and the hybrid learning rules are linear rather than nonlinear, thus leading to faster convergence. Unlike many existing methods for data analysis, our network architecture and learning rules are truly adaptive and are thus appropriate for real-time use.
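To make the hybrid scheme concrete, here is a minimal sketch of such a network: a single layer of Gaussian locally-tuned units whose centers are placed by a self-organized stage, followed by a supervised fit of the linear output weights. The specifics are assumptions for illustration: batch k-means stands in for the paper's adaptive clustering, the unit widths come from a nearest-neighbor heuristic of the flavor the paper describes, and a single least-squares solve replaces incremental LMS updates (both exploit the same linearity of the output weights).

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Self-organized stage: place unit centers with batch k-means
    (a stand-in for the paper's adaptive clustering)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute means.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_design(X, centers, widths):
    """Responses of the locally-tuned (Gaussian) units: only units whose
    centers lie near the input respond appreciably."""
    d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * widths ** 2))

# Toy regression problem: approximate y = sin(x) on [0, 2*pi].
X = np.linspace(0.0, 2.0 * np.pi, 200)[:, None]
y = np.sin(X).ravel()

centers = kmeans(X, k=10)

# Width heuristic (an assumption): each unit's width is the distance
# to its nearest neighboring center.
d = np.sqrt(((centers[:, None] - centers[None]) ** 2).sum(-1))
np.fill_diagonal(d, np.inf)
widths = d.min(axis=1)

# Supervised stage: the output weights enter linearly, so fitting them
# is a linear problem; one least-squares solve suffices here.
H = rbf_design(X, centers, widths)
w, *_ = np.linalg.lstsq(H, y, rcond=None)
print("max abs error:", np.abs(H @ w - y).max())
```

This separation is the source of the claimed speed advantage: neither stage requires the nonlinear gradient descent through hidden layers that backpropagation does, and the local responses mean only a handful of units contribute per input.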