Temporally asymmetric learning supports sequence processing in multi-winner self-organizing maps

  • Authors:
  • Reiner Schulz; James A. Reggia

  • Affiliations:
  • Departments of Computer Science and Neurology, and UMIACS, University of Maryland, College Park, MD (both authors)

  • Venue:
  • Neural Computation
  • Year:
  • 2004


Abstract

We examine the extent to which modified Kohonen self-organizing maps (SOMs) can learn unique representations of temporal sequences while still supporting map formation. Two biologically inspired extensions are made to traditional SOMs: selection of multiple simultaneous rather than single "winners" and the use of local intramap connections that are trained according to a temporally asymmetric Hebbian learning rule. The extended SOM is then trained with variable-length temporal sequences that are composed of phoneme feature vectors, with each sequence corresponding to the phonetic transcription of a noun. The model transforms each input sequence into a spatial representation (final activation pattern on the map). Training improves this transformation by, for example, increasing the uniqueness of the spatial representations of distinct sequences, while still retaining map formation based on input patterns. The closeness of the spatial representations of two sequences is found to correlate significantly with the sequences' similarity. The extended model presented here raises the possibility that SOMs may ultimately prove useful as visualization tools for temporal sequences and as preprocessors for sequence pattern recognition systems.
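To make the mechanism described in the abstract concrete, the following is a minimal Python/NumPy sketch of a multi-winner SOM with temporally asymmetric Hebbian lateral connections. It assumes k-winner selection on net input, a simple pre-before-post increment on lateral weights, and decaying activation carried across time steps; the parameter names and update rules here are illustrative assumptions, not the paper's published equations.

```python
import numpy as np

class MultiWinnerTemporalSOM:
    """Minimal sketch of a multi-winner SOM with temporally asymmetric
    Hebbian lateral connections. Parameters and update rules are
    illustrative assumptions, not the paper's published equations."""

    def __init__(self, n_units=100, n_features=12, k_winners=5,
                 eta_w=0.05, eta_lat=0.01, decay=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_units, n_features))  # feedforward weights
        self.L = np.zeros((n_units, n_units))            # lateral (intramap) weights
        self.k = k_winners
        self.eta_w, self.eta_lat, self.decay = eta_w, eta_lat, decay

    def step(self, x, prev_act):
        """Process one element (feature vector) of a temporal sequence."""
        # Feedforward match plus lateral input from the previous time step.
        net = -np.linalg.norm(self.W - x, axis=1) + self.L @ prev_act

        # Multiple simultaneous winners instead of a single best-matching unit.
        winners = np.argsort(net)[-self.k:]
        act = np.zeros(len(net))
        act[winners] = 1.0

        # Kohonen-style update pulls the winners' weights toward the input.
        self.W[winners] += self.eta_w * (x - self.W[winners])

        # Temporally asymmetric Hebbian rule: strengthen connections from units
        # active at t-1 onto units active at t (pre-before-post), not the reverse.
        self.L += self.eta_lat * np.outer(act, prev_act)
        np.fill_diagonal(self.L, 0.0)

        # Activation decays and carries over into the next time step.
        return self.decay * prev_act + act

    def encode(self, seq):
        """Map a variable-length sequence to its final activation pattern."""
        act = np.zeros(self.W.shape[0])
        for x in seq:
            act = self.step(x, act)
        return act

# Example usage with random stand-ins for phoneme feature vectors.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    som = MultiWinnerTemporalSOM()
    seq_a = [rng.normal(size=12) for _ in range(4)]
    seq_b = [rng.normal(size=12) for _ in range(6)]
    a, b = som.encode(seq_a), som.encode(seq_b)
    # Overlap of the two final activation patterns (higher = more similar).
    print(float(a @ b))
```

In this sketch, the final activation pattern returned by `encode` plays the role of the spatial representation of a sequence, and the dot product of two such patterns is one simple way to compare them.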