Passive dynamic walker controller design employing an RLS-based natural actor-critic learning algorithm

  • Authors:
  • Baeksuk Chu, Daehie Hong, Jooyoung Park, Jae-Hun Chung

  • Affiliations:
  • Baeksuk Chu and Daehie Hong: Department of Mechanical Engineering, Korea University, 5-1, Anam-dong, Sungbuk-gu, Seoul 136-701, Republic of Korea
  • Jooyoung Park: Department of Control and Instrumentation Engineering, Korea University, Jochiwon, Chungnam 339-700, Republic of Korea
  • Jae-Hun Chung: Department of Mechanical Engineering, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030, USA

  • Venue:
  • Engineering Applications of Artificial Intelligence
  • Year:
  • 2008

Abstract

A passive dynamic walker belongs to a class of bipedal walking robots that can walk stably down a small decline without using any actuators. The purpose of this research is to design a controller that extends passive dynamic walking to actuated robots capable of walking on flat terrain. To achieve this objective, a control algorithm based on reinforcement learning (RL) was used. RL is goal-directed learning of a mapping from situations to actions that relies neither on exemplary supervision nor on a complete model of the environment; its goal is to maximize a reward, the evaluative feedback received from the environment. The control objective, stable walking on level ground, is incorporated directly into the reward constructed for the actuated passive dynamic walker. In this study, an RL algorithm based on the actor-critic architecture and the natural gradient method is applied, and the recursive least-squares (RLS) method is employed in the learning process to improve its efficiency. The control algorithm was verified through computer simulations, with stable locomotion confirmed by eigenvalue analysis.
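
The abstract describes the method only at a high level. As a rough illustration of the RLS-based natural actor-critic idea, the following is a minimal sketch on a toy one-dimensional regulation problem, not the paper's walker model; the features, dimensions, forgetting factor, and step sizes below are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of an RLS-based natural actor-critic loop.  Everything here
# (features, dimensions, step sizes) is an illustrative assumption, not
# the paper's actual walker model or parameter set.

rng = np.random.default_rng(0)

n = 2                    # policy-parameter / compatible-feature dimension
theta = np.zeros(n)      # actor (policy) parameters
w = np.zeros(n)          # critic weights = natural-gradient estimate
v = np.zeros(n)          # value-function weights (same features, for brevity)
P_w = 1e3 * np.eye(n)    # RLS inverse-correlation matrices
P_v = 1e3 * np.eye(n)
lam, gamma, alpha = 0.99, 0.95, 0.05  # forgetting factor, discount, actor step


def rls_step(w, P, phi, target, lam):
    """One recursive least-squares update fitting w so that w @ phi ~ target."""
    k = P @ phi / (lam + phi @ P @ phi)    # gain vector
    w = w + k * (target - w @ phi)         # correct by the a-priori error
    P = (P - np.outer(k, phi @ P)) / lam   # update inverse correlation matrix
    return w, P


def features(s):
    return np.array([s, 1.0])  # toy state features


for t in range(500):
    s = rng.normal()                  # toy state
    mu = theta @ features(s)          # mean of a unit-variance Gaussian policy
    a = mu + rng.normal()             # sampled action
    r = -(s + a) ** 2                 # toy reward: drive s + a toward zero
    s_next = rng.normal()             # toy next state

    # Critic 1: value function fitted by RLS on the TD target.
    td_target = r + gamma * (v @ features(s_next))
    v, P_v = rls_step(v, P_v, features(s), td_target, lam)

    # Critic 2: advantage fitted by RLS on the TD error, using the
    # compatible features psi = grad_theta log pi(a|s).
    psi = (a - mu) * features(s)
    delta = td_target - v @ features(s)
    w, P_w = rls_step(w, P_w, psi, delta, lam)

    # Actor: with compatible features, the fitted critic weights w are
    # themselves the natural policy gradient, so theta += alpha * w.
    theta = theta + alpha * w
```

The property this sketch exploits is that, when the advantage is approximated with the compatible features ψ = ∇θ log π(a|s), the fitted critic weights w coincide with the natural policy gradient, so the actor update needs no separate Fisher-matrix inversion; RLS lets the critic reuse each sample more efficiently than a plain stochastic-gradient update. For the walker itself, the eigenvalue analysis mentioned in the abstract is presumably the standard check that all eigenvalues of the linearized stride-to-stride (Poincaré) return map lie inside the unit circle.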