A reinforcement learning with switching controllers for a continuous action space

  • Authors:
  • Masato Nagayoshi; Hajime Murao; Hisashi Tamaki

  • Affiliations:
  • Niigata College of Nursing, Joetsu, Japan 943-0147; Faculty of Cross-Cultural Studies, Kobe University, Kobe, Japan; Graduate School of Engineering, Kobe University, Kobe, Japan

  • Venue:
  • Artificial Life and Robotics
  • Year:
  • 2010

Abstract

Reinforcement learning (RL) attracts much attention as a technique for realizing computational intelligence, such as adaptive and autonomous decentralized systems. In general, however, it is not easy to put RL to practical use. One difficulty is designing a suitable action space for an agent, which means satisfying two requirements that trade off against each other: (i) keeping the characteristics (or structure) of the original search space as much as possible, in order to seek strategies close to the optimal one; and (ii) reducing the search space as much as possible, in order to expedite learning. To design a suitable action space adaptively, in this article we propose an RL model with switching controllers, based on Q-learning and an actor-critic, that mimics the process of an infant's motor development, in which gross motor skills develop before fine motor skills. A method for switching controllers is then constructed by introducing and referring to the "entropy." Finally, computational experiments on a path-planning problem with a continuous action space confirm the validity and potential of the proposed method.
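
The abstract describes the switching mechanism only at a high level. As a minimal illustrative sketch (not the authors' implementation), the Python code below shows one plausible reading: a tabular Q-learning "gross" controller over a few coarse discrete actions, a Gaussian actor-critic "fine" controller over the continuous action space, and per-state switching driven by the normalized entropy of the Boltzmann policy over the Q-values. All class names, update rules, and the threshold value are assumptions, not taken from the paper.

```python
import numpy as np

class SwitchingAgent:
    """Hypothetical sketch: a gross (tabular Q-learning) controller plus a
    fine (Gaussian actor-critic) controller, switched per state by the
    entropy of the coarse policy. The paper's formulation may differ."""

    def __init__(self, n_states, coarse_actions, entropy_threshold=0.5,
                 alpha=0.1, gamma=0.95, tau=1.0):
        self.coarse_actions = np.asarray(coarse_actions, dtype=float)
        self.Q = np.zeros((n_states, len(coarse_actions)))  # gross controller
        self.mu = np.zeros(n_states)      # actor: mean of Gaussian policy
        self.sigma = np.ones(n_states)    # actor: std of Gaussian policy
        self.V = np.zeros(n_states)       # critic: state values
        self.entropy_threshold = entropy_threshold  # assumed switching point
        self.alpha, self.gamma, self.tau = alpha, gamma, tau

    def _softmax(self, q):
        z = np.exp((q - q.max()) / self.tau)
        return z / z.sum()

    def policy_entropy(self, s):
        """Entropy of the Boltzmann policy over Q, normalized to [0, 1]."""
        p = self._softmax(self.Q[s])
        return -np.sum(p * np.log(p + 1e-12)) / np.log(len(p))

    def act(self, s, rng=np.random):
        # High entropy: coarse Q-values are still undifferentiated, so keep
        # exploring with the gross controller. Low entropy: the coarse policy
        # has settled, so refine the action with the continuous actor-critic.
        if self.policy_entropy(s) > self.entropy_threshold:
            a = rng.choice(len(self.coarse_actions), p=self._softmax(self.Q[s]))
            return self.coarse_actions[a], ("coarse", a)
        u = rng.normal(self.mu[s], self.sigma[s])
        return u, ("fine", u)

    def update(self, s, tag, r, s_next):
        mode, a = tag
        if mode == "coarse":  # standard Q-learning update
            td = r + self.gamma * self.Q[s_next].max() - self.Q[s, a]
            self.Q[s, a] += self.alpha * td
        else:                 # Gaussian actor-critic policy-gradient update
            td = r + self.gamma * self.V[s_next] - self.V[s]
            self.V[s] += self.alpha * td
            self.mu[s] += self.alpha * td * (a - self.mu[s])
            grad = ((a - self.mu[s]) ** 2 - self.sigma[s] ** 2) / self.sigma[s]
            self.sigma[s] = max(0.05, self.sigma[s] + self.alpha * td * grad)
```

The intent mirrors the gross-before-fine motor-development analogy from the abstract: while the entropy of a state's coarse policy is high, the agent acts with large discrete steps; once that policy has converged, control passes to the continuous controller for fine adjustment.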