Reinforcement learning by chaotic exploration generator in target capturing task

  • Authors:
  • Koichiro Morihiro; Teijiro Isokawa; Nobuyuki Matsui; Haruhiko Nishimura

  • Affiliations:
  • Hyogo University of Teacher Education, Hyogo, Japan; Himeji Institute of Technology, Hyogo, Japan; Himeji Institute of Technology, Hyogo, Japan; Graduate School of Applied Informatics, University of Hyogo, Hyogo, Japan

  • Venue:
  • KES'05 Proceedings of the 9th international conference on Knowledge-Based Intelligent Information and Engineering Systems - Volume Part I
  • Year:
  • 2005

Abstract

Exploration, a process of trial and error, plays a very important role in reinforcement learning. A uniform pseudorandom number generator is the familiar choice as the source of exploration. However, a chaotic source is also known to produce a random-like sequence, much as a stochastic source does. Applying this random-like feature of deterministic chaos to exploration, we previously found that a deterministic chaotic exploration generator based on the logistic map gives better performance than a stochastic random exploration generator in a nonstationary shortcut maze problem. In this work, to confirm this difference in performance, we examine target capturing as another nonstationary task. The simulation results for this task corroborate those of our previous work.
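
The abstract does not spell out how the chaotic sequence is wired into action selection, so the following is only a minimal sketch of the general idea: driving an epsilon-greedy exploration step from a logistic-map sequence instead of a uniform pseudorandom draw. The logistic-map parameters, the epsilon-greedy scheme, and the Q-values shown are illustrative assumptions, not the authors' exact setup.

```python
import random

def logistic_map_sequence(x0=0.3137, r=4.0):
    """Yield a chaotic sequence in (0, 1) from the logistic map x_{n+1} = r*x_n*(1 - x_n)."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def epsilon_greedy_action(q_values, epsilon, explore_draw):
    """Select an action: exploit the greedy choice, or explore using the supplied draw in [0, 1)."""
    if explore_draw < epsilon:
        # Reuse the draw to pick an exploratory action index.
        return int(explore_draw / epsilon * len(q_values)) % len(q_values)
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Two interchangeable exploration sources: chaotic (logistic map) vs. stochastic (uniform PRNG).
chaotic = logistic_map_sequence()
q = [0.0, 0.5, 0.2, 0.1]  # hypothetical Q-values for four actions
a_chaotic = epsilon_greedy_action(q, 0.1, next(chaotic))
a_random = epsilon_greedy_action(q, 0.1, random.random())
```

The point of interest in the paper is that swapping the uniform draw for the deterministic chaotic one changes the temporal structure of exploration, which the authors report to be advantageous in nonstationary tasks such as the shortcut maze and target capturing.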