Learning CPG sensory feedback with policy gradient for biped locomotion for a full-body humanoid
AAAI'05: Proceedings of the 20th National Conference on Artificial Intelligence, Volume 3
Rhythmic movements of animals, such as locomotion, are considered to be controlled by neural circuits called central pattern generators (CPGs). This article presents a reinforcement learning (RL) method for a CPG controller, inspired by the control mechanism of animals. Because the CPG controller is an instance of a recurrent neural network, a naive application of RL runs into difficulties. In addition, because the state and action spaces of controlled systems are very large in real problems such as robot control, learning the value function is also difficult. In this study, we propose a learning scheme for a CPG controller, called the CPG-actor-critic model, whose learning algorithm is based on a policy gradient method. We apply our RL method to the autonomous acquisition of biped locomotion by a biped robot simulator. Computer simulations show that our method trains the CPG controller stably.
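As background for the abstract, the generic actor-critic policy-gradient scheme it refers to can be sketched on a toy problem. This is an illustrative sketch only, not the paper's CPG-actor-critic model: the linear-Gaussian policy, the one-dimensional linear dynamics, the quadratic value feature, and all parameter names are assumptions made for the example.

```python
import numpy as np

# Toy actor-critic policy-gradient sketch (illustrative assumption,
# not the paper's CPG-actor-critic).  State: scalar x; goal: drive x to 0.
rng = np.random.default_rng(0)

w_actor = 0.0    # policy mean parameter: action a ~ N(w_actor * x, sigma^2)
w_critic = 0.0   # value estimate: V(x) ~= w_critic * x^2
sigma = 0.5
alpha_a, alpha_c, gamma = 0.01, 0.05, 0.95

x = 1.0
for step in range(5000):
    a = w_actor * x + sigma * rng.standard_normal()   # sample action
    x_next = 0.9 * x + 0.1 * a                        # simple linear dynamics
    r = -x_next ** 2                                  # reward: stay near 0

    # TD(0) critic update on the quadratic value feature x^2
    td = r + gamma * w_critic * x_next ** 2 - w_critic * x ** 2
    w_critic += alpha_c * td * x ** 2

    # Actor update: TD error as advantage estimate,
    # with grad log pi = (a - mean) * x / sigma^2 for a Gaussian policy
    w_actor += alpha_a * td * (a - w_actor * x) * x / sigma ** 2
    w_actor = float(np.clip(w_actor, -15.0, 5.0))  # keep the toy system stable

    x = x_next
    if (step + 1) % 100 == 0:        # fixed-length episodes
        x = rng.uniform(-1.0, 1.0)

print(w_actor)  # learned feedback gain; negative, i.e. pushes x toward 0
```

In the paper's setting the policy is instead realized by the CPG network itself, and the point of the CPG-actor-critic decomposition is to make the policy-gradient update applicable despite the controller's recurrent structure.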