Attractor memory with self-organizing input

  • Authors:
  • Christopher Johansson; Anders Lansner

  • Affiliations:
  • Department of Numerical Analysis and Computer Science, Royal Institute of Technology, Stockholm, Sweden

  • Venue:
  • BioADIT'06: Proceedings of the Second International Conference on Biologically Inspired Approaches to Advanced Information Technology
  • Year:
  • 2006

Abstract

We propose a neural-network-based autoassociative memory system for unsupervised learning. This system is intended as an example of how a general information processing architecture, similar to that of neocortex, could be organized. The network's units are arranged into two separate groups called populations: an input population and a hidden population. The units in the input population form receptive fields that project sparsely onto the units of the hidden population, and these forward projections are trained with competitive learning. The hidden population implements an attractor memory. A back projection from the hidden to the input population is trained with a Hebbian learning rule. This system can process correlated and densely coded patterns, which regular attractor neural networks handle poorly. The system performs well on a number of typical attractor neural network tasks such as pattern completion, noise reduction, and prototype extraction.
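
The abstract's architecture can be made concrete with a short sketch. Below is a minimal NumPy illustration of the three pieces it names: a competitively trained forward projection from an input to a hidden population, an attractor memory over the hidden units, and a Hebbian back projection for reconstruction. All sizes, learning rates, and the k-winners-take-all coding are assumptions chosen for illustration, and a plain Hopfield-style network stands in for the paper's actual attractor memory; this is a sketch of the general scheme, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID = 64, 32   # population sizes (assumed for illustration)
K = 8                  # active hidden units per pattern (sparse code)

# Forward projection, trained with competitive learning: the winning
# hidden units move their weight vectors toward the inputs they claim.
W_fwd = rng.normal(scale=0.1, size=(N_HID, N_IN))

def hidden_code(x, k=K):
    """k-winners-take-all hidden code for input pattern x."""
    h = np.zeros(N_HID)
    h[np.argsort(W_fwd @ x)[-k:]] = 1.0
    return h

def train_forward(patterns, lr=0.1, epochs=20):
    for _ in range(epochs):
        for x in patterns:
            winners = hidden_code(x) > 0
            W_fwd[winners] += lr * (x - W_fwd[winners])

# Attractor memory over the hidden population. A standard Hopfield-style
# outer-product rule on +/-1-coded hidden states stands in for the
# paper's attractor network (an assumption, not the authors' model).
def train_attractor(patterns):
    W = np.zeros((N_HID, N_HID))
    for x in patterns:
        s = 2.0 * hidden_code(x) - 1.0
        W += np.outer(s, s)
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def settle(W, h, steps=20):
    """Iterate the attractor dynamics from hidden code h toward a fixed point."""
    s = 2.0 * h - 1.0
    for _ in range(steps):
        s = np.where(W @ s >= 0.0, 1.0, -1.0)
    return (s + 1.0) / 2.0

# Back projection, trained with a simple Hebbian (outer-product) rule.
def train_back(patterns):
    W = np.zeros((N_IN, N_HID))
    for x in patterns:
        W += np.outer(x, hidden_code(x))
    return W / len(patterns)

# --- Usage: pattern completion from a corrupted cue ---------------------
patterns = (rng.random((5, N_IN)) < 0.3).astype(float)  # dense binary patterns
train_forward(patterns)
W_att, W_back = train_attractor(patterns), train_back(patterns)

flip = rng.random(N_IN) < 0.15                  # flip ~15% of the bits
cue = np.where(flip, 1.0 - patterns[0], patterns[0])

h = settle(W_att, hidden_code(cue))             # attractor cleans the code
out = W_back @ h
recon = (out > 0.5 * out.max()).astype(float)   # threshold the back projection
print("bits wrong before:", int(np.sum(cue != patterns[0])))
print("bits wrong after: ", int(np.sum(recon != patterns[0])))
```

The division of labor mirrors the abstract: competitive learning gives the dense, correlated input a sparse hidden code that a conventional attractor network can store reliably, the attractor dynamics perform the cleanup, and the Hebbian back projection maps the cleaned code back to input space.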