Self-organizing neural architecture for reinforcement learning

  • Authors:
  • Ah-Hwee Tan

  • Affiliations:
  • Intelligent Systems Centre and School of Computer Engineering, Nanyang Technological University, Singapore

  • Venue:
  • ISNN'06: Proceedings of the Third International Conference on Advances in Neural Networks - Volume Part I
  • Year:
  • 2006

Abstract

Self-organizing neural networks are typically associated with unsupervised learning. This paper presents a self-organizing neural architecture, known as TD-FALCON, that learns cognitive codes across multi-modal pattern spaces involving states, actions, and rewards, and is capable of adapting and functioning in a dynamic environment with external evaluative feedback signals. We present a case study of TD-FALCON on a mine avoidance and navigation cognitive task, and illustrate its performance by comparing it with a state-of-the-art reinforcement learning approach based on a gradient descent backpropagation algorithm.
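
The abstract does not spell out the learning algorithm, so the following is only an illustrative sketch of the kind of temporal-difference (TD) value update and epsilon-greedy action selection that TD-based methods such as TD-FALCON build on. The tabular value store, the toy corridor environment, and all names and parameter values (q_table, ALPHA, GAMMA, EPSILON) are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: generic tabular TD (Q-learning) update with
# epsilon-greedy action selection. The environment and all constants are
# placeholders, not the paper's minefield navigation task.
import random
from collections import defaultdict

ALPHA = 0.5      # learning rate (assumed value)
GAMMA = 0.9      # discount factor (assumed value)
EPSILON = 0.1    # exploration rate (assumed value)
ACTIONS = ["left", "right"]

q_table = defaultdict(float)  # maps (state, action) -> estimated value


def choose_action(state):
    """Epsilon-greedy: explore with probability EPSILON, otherwise pick a best action."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    values = [q_table[(state, a)] for a in ACTIONS]
    best = max(values)
    return random.choice([a for a, v in zip(ACTIONS, values) if v == best])


def td_update(state, action, reward, next_state):
    """One-step TD update: move Q(s, a) toward reward + GAMMA * max_a' Q(s', a')."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    td_error = reward + GAMMA * best_next - q_table[(state, action)]
    q_table[(state, action)] += ALPHA * td_error


def step(state, action):
    """Toy 1-D corridor of 5 cells; reward +1 for reaching the rightmost cell."""
    next_state = min(4, max(0, state + (1 if action == "right" else -1)))
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward, next_state == 4


for episode in range(200):
    state, done = 2, False
    while not done:
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        td_update(state, action, reward, next_state)
        state = next_state

print({k: round(v, 2) for k, v in sorted(q_table.items())})
```

The sketch uses a lookup table for the value function; the architecture described in the abstract instead learns such state-action-reward associations as cognitive codes in a self-organizing network, which this generic example does not attempt to reproduce.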