BAM learning of nonlinearly separable tasks by using an asymmetrical output function and reinforcement learning

  • Authors:
  • Sylvain Chartier; Mounir Boukadoum; Mahmood Amiri

  • Affiliations:
  • School of Psychology, University of Ottawa, Ottawa, ON, Canada; Department of Computer Science, Université du Québec à Montréal, Montréal, QC, Canada; School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2009

Abstract

Most bidirectional associative memory (BAM) networks use a symmetrical output function for dual fixed-point behavior. In this paper, we show that by introducing an asymmetry parameter into a recently introduced chaotic BAM output function, prior knowledge can be used to momentarily disable desired attractors in memory, thereby biasing the search space and improving recall performance. This property allows control of chaotic wandering, favoring given subspaces over others. In addition, reinforcement learning can then enable a dual BAM architecture to store and recall nonlinearly separable patterns. Our results show that the same BAM framework can model three different types of learning: supervised, reinforcement, and unsupervised. This ability is very promising from a cognitive modeling viewpoint. The new BAM model is also useful from an engineering perspective; our simulation results reveal a notable overall increase in BAM learning and recall performance when a hybrid model with the general regression neural network (GRNN) is used.
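
The role of the asymmetry parameter can be illustrated with a small numerical sketch. The code below is not the authors' implementation: it assumes a saturating cubic transmission function of the kind used in earlier Chartier-Boukadoum BAMs, and the asymmetry parameter alpha, the outer-product storage rule, and the pattern sizes are illustrative assumptions chosen only to show how skewing the output function can weaken one polarity of attractors during bidirectional recall.

    import numpy as np

    def output(a, delta=0.4, alpha=1.0):
        # Saturating cubic transmission: (delta+1)*a - delta*a^3 inside [-1, 1],
        # clipped to +/-1 outside. alpha < 1 weakens the negative branch
        # (hypothetical asymmetry, for illustration only).
        y = np.where(a > 1, 1.0,
                     np.where(a < -1, -1.0, (delta + 1) * a - delta * a ** 3))
        return np.where(y < 0, alpha * y, y)

    def outer_product_weights(X, Y):
        # Simple Hebbian outer-product storage (an illustrative stand-in for
        # the paper's learning rule); rows of X and Y are associated pairs.
        return Y.T @ X / X.shape[1]

    def recall(W, V, x0, steps=25, delta=0.4, alpha=1.0):
        # Bidirectional recall: iterate x -> y -> x through both weight layers.
        x = x0.astype(float)
        for _ in range(steps):
            y = output(W @ x, delta, alpha)
            x = output(V @ y, delta, alpha)
        return x, y

    rng = np.random.default_rng(0)
    X = rng.choice([-1.0, 1.0], size=(4, 8))    # four 8-unit input patterns
    Y = rng.choice([-1.0, 1.0], size=(4, 6))    # four associated 6-unit outputs
    W = outer_product_weights(X, Y)             # x -> y weights
    V = outer_product_weights(Y, X)             # y -> x weights

    probe = X[0] * np.where(rng.random(8) < 0.125, -1, 1)  # noisy pattern 0
    x_sym, y_sym = recall(W, V, probe, alpha=1.0)    # symmetric recall
    x_asym, y_asym = recall(W, V, probe, alpha=0.8)  # negative attractors weakened
    print(np.sign(x_sym), np.sign(y_sym))
    print(np.sign(x_asym), np.sign(y_asym))

Comparing the two recall runs shows the intended effect in miniature: with alpha below 1, trajectories are pushed away from attractors dominated by negative components, which is the sense in which prior knowledge can bias the search space before learning or recall.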