An Autoassociative Neural Network Model of Paired-Associate Learning

  • Authors:
  • Daniel S. Rizzuto; Michael J. Kahana

  • Affiliations:
  • Volen Center for Complex Systems, Brandeis University, Waltham, MA 02454, U.S.A. (both authors)

  • Venue:
  • Neural Computation
  • Year:
  • 2001

Abstract

Hebbian heteroassociative learning is inherently asymmetric. Storing a forward association, from item A to item B, enables recall of B (given A), but does not permit recall of A (given B). Recurrent networks can solve this problem by associating A to B and B back to A. In these recurrent networks, the forward and backward associations can be differentially weighted to account for asymmetries in recall performance. In the special case of equal strength forward and backward weights, these recurrent networks can be modeled as a single autoassociative network where A and B are two parts of a single, stored pattern. We analyze a general, recurrent neural network model of associative memory and examine its ability to fit a rich set of experimental data on human associative learning. The model fits the data significantly better when the forward and backward storage strengths are highly correlated than when they are less correlated. This network-based analysis of associative learning supports the view that associations between symbolic elements are better conceptualized as a blending of two ideas into a single unit than as separately modifiable forward and backward associations linking representations in memory.
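The storage schemes contrasted above can be illustrated with a small numerical sketch. The code below is not the authors' model; it is a minimal example, assuming bipolar (±1) random patterns and simple outer-product (Hebbian) learning, of (a) a purely heteroassociative weight matrix, which supports recall of B given A but not A given B, and (b) the equal-strength special case in which the concatenated pattern [A, B] is stored in a single autoassociative matrix, so either half cues the other. All names (`W_hetero`, `W_auto`, `n`) are illustrative choices, not terms from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 51                                   # units per item (odd, so A.B != 0)
A = rng.choice([-1, 1], size=n)          # bipolar pattern for item A
B = rng.choice([-1, 1], size=n)          # bipolar pattern for item B

# (a) Heteroassociative Hebbian storage: the outer product maps A -> B only.
W_hetero = np.outer(B, A) / n
print("A cues B:", np.array_equal(np.sign(W_hetero @ A), B))   # True
print("B cues A:", np.array_equal(np.sign(W_hetero @ B), A))   # False: the forward store is one-way

# (b) Autoassociative storage of the concatenated pattern [A, B],
# corresponding to equal forward and backward association strengths.
P = np.concatenate([A, B])
W_auto = np.outer(P, P) / (2 * n)
np.fill_diagonal(W_auto, 0)

# Cue with A alone (B half zeroed) and let the network settle.
state = np.concatenate([A, np.zeros(n)])
for _ in range(5):
    state = np.sign(W_auto @ state)
    state[state == 0] = 1                # break ties deterministically
print("A cues B in the autoassociative store:", np.array_equal(state[n:], B))
print("B cues A in the autoassociative store:", np.array_equal(
    np.sign(W_auto @ np.concatenate([np.zeros(n), B]))[:n], A))
```

In the general recurrent model described in the abstract, the off-diagonal blocks coupling the A units to the B units would carry separate forward and backward strengths; the autoassociative sketch in (b) is the special case where those strengths are equal, which is the configuration the fits to the behavioral data favor.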