Knowledge-based recurrent neural networks in Reinforcement Learning

  • Authors:
  • LE Tien Dung; Takashi Komeda; Motoki Takagi

  • Affiliations:
  • Shibaura Institute of Technology, Minuma-ku, Saitama, Japan (all authors)

  • Venue:
  • ASC '07 Proceedings of The Eleventh IASTED International Conference on Artificial Intelligence and Soft Computing
  • Year:
  • 2007


Abstract

Recurrent Neural Networks (RNNs) have been shown to have a strong ability to solve some hard problems, but learning these problems from scratch typically takes a very long time. For supervised learning, several methods have been proposed to reuse knowledge acquired in previous, similar tasks. However, for Reinforcement Learning (RL), especially for Partially Observable Markov Decision Processes (POMDPs), it is difficult to apply these algorithms directly. This paper presents several methods with the potential to transfer knowledge in RL using RNNs: Directed Transfer, Cascade-Correlation, Mixture of Expert Systems, and Two-Level Architecture. Preliminary results of experiments in the E-maze domain show the potential of these methods: with knowledge-based learning, the learning time for a new problem is much shorter than learning from scratch, even though the new task looks very different from the previous tasks.
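The simplest of the transfer schemes the abstract names, directed transfer, amounts to initializing the new task's network with weights learned on a previous task instead of random values. The sketch below illustrates that idea with a minimal Elman-style RNN producing Q-value estimates; the network sizes, parameter names, and `step` function are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_rnn(n_in, n_hid, n_out, rng):
    """Randomly initialize a simple Elman-style RNN's parameters."""
    return {
        "W_in": rng.normal(0, 0.1, (n_hid, n_in)),    # input -> hidden
        "W_rec": rng.normal(0, 0.1, (n_hid, n_hid)),  # hidden -> hidden (recurrence)
        "W_out": rng.normal(0, 0.1, (n_out, n_hid)),  # hidden -> Q-values
    }

def step(params, x, h):
    """One forward step: update hidden state, emit Q-value estimates."""
    h = np.tanh(params["W_in"] @ x + params["W_rec"] @ h)
    q = params["W_out"] @ h
    return q, h

# Directed transfer (sketch): copy the source task's learned weights
# into the target network, then fine-tune on the new task.
source = init_rnn(4, 8, 2, rng)                    # stands in for a net trained on task A
target = {k: v.copy() for k, v in source.items()}  # reused as the starting point for task B

x, h = np.ones(4), np.zeros(8)
q_src, _ = step(source, x, h)
q_tgt, _ = step(target, x, h)
assert np.allclose(q_src, q_tgt)  # identical before any fine-tuning on task B
```

Because the target network starts from a policy-shaped weight landscape rather than noise, subsequent RL updates on the new task have less to learn, which is the intuition behind the reduced learning times the abstract reports.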