Reinforcement learning for a CPG-driven biped robot

  • Authors:
  • Takeshi Mori; Yutaka Nakamura; Masa-Aki Sato; Shin Ishii

  • Affiliations:
  • Nara Institute of Science and Technology, Takayama, Ikoma, Nara
  • CREST, JST and Nara Institute of Science and Technology, Takayama, Ikoma, Nara
  • ATR Computational Neuroscience Laboratories, Soraku, Kyoto and CREST, JST
  • Nara Institute of Science and Technology, Takayama, Ikoma, Nara and CREST, JST

  • Venue:
  • AAAI'04: Proceedings of the 19th National Conference on Artificial Intelligence
  • Year:
  • 2004

Abstract

Animals' rhythmic movements, such as locomotion, are considered to be controlled by neural circuits called central pattern generators (CPGs). This article presents a reinforcement learning (RL) method for a CPG controller, inspired by this control mechanism in animals. Because a CPG controller is a kind of recurrent neural network, a naive application of RL runs into difficulties. In addition, since the state and action spaces of controlled systems are very large in real problems such as robot control, learning the value function is also difficult. In this study, we propose a learning scheme for a CPG controller, called the CPG-actor-critic model, whose learning algorithm is based on a policy gradient method. We apply our RL method to the autonomous acquisition of biped locomotion by a biped robot simulator. Computer simulations show that our method can train a CPG controller while keeping the learning process stable.
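The CPG described in the abstract is a recurrent network of neural oscillators. As a minimal illustration of such a building block (an expository sketch, not the authors' exact network), the code below simulates a two-neuron Matsuoka oscillator: two mutually inhibiting neurons with self-adaptation produce an alternating rhythmic output from a constant tonic drive, with no rhythmic input. In a CPG-actor-critic scheme, the controller's actor would modulate slowly varying inputs such as the drive `s` rather than emit joint torques directly; all function names and parameter values here are illustrative.

```python
def matsuoka_step(state, s=1.0, tau=0.25, T=0.5, beta=2.5, w=2.5, dt=0.005):
    """One Euler step of a two-neuron Matsuoka oscillator.

    state = (u1, v1, u2, v2): membrane potentials u_i and adaptation
    (fatigue) variables v_i. Each neuron's firing rate is y_i = max(0, u_i);
    the neurons inhibit each other with weight w, adapt with gain beta,
    and receive a constant tonic drive s. Returns (new_state, (y1, y2)).
    """
    u1, v1, u2, v2 = state
    y1, y2 = max(0.0, u1), max(0.0, u2)
    du1 = (-u1 - beta * v1 - w * y2 + s) / tau
    dv1 = (-v1 + y1) / T
    du2 = (-u2 - beta * v2 - w * y1 + s) / tau
    dv2 = (-v2 + y2) / T
    new_state = (u1 + dt * du1, v1 + dt * dv1,
                 u2 + dt * du2, v2 + dt * dv2)
    return new_state, (y1, y2)

def simulate(steps=6000, drive=1.0):
    """Integrate the oscillator; returns the list of (y1, y2) outputs."""
    state = (0.1, 0.0, 0.0, 0.0)  # small asymmetry breaks the symmetric rest state
    outputs = []
    for _ in range(steps):
        state, y = matsuoka_step(state, s=drive)
        outputs.append(y)
    return outputs
```

With these parameters the resting state is unstable and the two neurons fire in alternation, giving a sustained antiphase rhythm suitable for driving, e.g., left/right hip joints; raising or lowering `drive` scales the output, which is one way a learned policy can shape the gait without generating the rhythm itself.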