Localist Attractor Networks

  • Authors:
  • Richard S. Zemel; Michael C. Mozer

  • Affiliations:
  • Department of Computer Science, University of Toronto, Toronto, ON M5S 1A4, Canada; Department of Computer Science, University of Colorado, Boulder, CO 80309-0430, U.S.A.

  • Venue:
  • Neural Computation
  • Year:
  • 2001

Abstract

Attractor networks, which map an input space to a discrete output space, are useful for pattern completion: cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU-intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have dynamics similar to those of their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of model parameters. We present simulation experiments that explore the behavior of localist attractor networks, showing that they yield few spurious attractors and readily exhibit two desirable properties of psychological and neurobiological models: priming (faster convergence to an attractor if the attractor has been recently visited) and gang effects (in which the presence of an attractor enhances the attractor basins of neighboring attractors).
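
To make the localist idea concrete, the sketch below illustrates one plausible reading of such dynamics: each attractor is stored explicitly as a prototype vector, responsibilities over attractors are computed as a softmax over distances (as in a Gaussian mixture E-step), and an annealed width parameter sharpens the competition until one attractor wins. This is a minimal illustration, not the paper's exact update equations; in particular, the blending schedule `alpha = sigma**2 / (sigma**2 + 1)` and the annealing constants are assumptions made here for demonstration.

    import numpy as np

    def localist_attractor_net(x, attractors, priors=None, n_iters=50,
                               sigma_start=2.0, sigma_min=0.1, anneal=0.9):
        """Simplified localist attractor dynamics (illustrative sketch).

        x          : input pattern (possibly noisy or incomplete), shape (dim,)
        attractors : one prototype per row, shape (n_attractors, dim) --
                     each attractor is encoded locally, by its own row
        priors     : prior weight per attractor (uniform if None)
        """
        n_attractors, dim = attractors.shape
        if priors is None:
            priors = np.full(n_attractors, 1.0 / n_attractors)
        y = x.copy()            # state starts at the external input
        sigma = sigma_start
        for _ in range(n_iters):
            # Responsibilities: posterior over which attractor best accounts
            # for the current state, under Gaussian noise of width sigma.
            sq_dists = np.sum((attractors - y) ** 2, axis=1)
            log_q = np.log(priors) - sq_dists / (2.0 * sigma ** 2)
            q = np.exp(log_q - log_q.max())
            q /= q.sum()
            # State update: blend the external input with the
            # responsibility-weighted attractor mean. With large sigma the
            # input dominates; as sigma anneals, the attractors take over
            # and q approaches a one-hot vector (a single winning attractor).
            alpha = sigma ** 2 / (sigma ** 2 + 1.0)   # assumed schedule
            y = alpha * x + (1.0 - alpha) * (q @ attractors)
            sigma = max(sigma * anneal, sigma_min)
        return y, q

    # Usage: clean up a noisy version of one of three stored patterns.
    attractors = np.array([[1., 1., 1.], [-1., -1., 1.], [1., -1., -1.]])
    noisy = np.array([1., 1., 1.]) + 0.4 * np.random.randn(3)
    y, q = localist_attractor_net(noisy, attractors)

Under this reading, the two behavioral properties in the abstract have natural handles: priming could be modeled by temporarily raising the prior of a recently visited attractor (speeding reconvergence), and gang effects arise because nearby prototypes jointly pull the state through the responsibility-weighted mean before the competition resolves. Both mappings are interpretive assumptions, not claims about the paper's exact mechanism.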