Learning to Control in Operational Space

  • Authors:
  • Jan Peters; Stefan Schaal

  • Affiliations:
  • Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany, and University of Southern California, 3641 Watt Way, Los Angeles, CA 90089, USA; University of Southern California, 3641 Watt Way, Los Angeles, CA 90089, USA, and ATR Computational Neuroscience Laboratory, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan

  • Venue:
  • International Journal of Robotics Research
  • Year:
  • 2008

Abstract

One of the most general frameworks for phrasing control problems for complex, redundant robots is operational-space control. However, while this framework is of essential importance for robotics and well understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots (e.g., humanoid robots). In this paper, we suggest a learning approach that phrases operational-space control as a direct inverse model learning problem. A first important insight of this paper is that a physically correct solution to the inverse problem with redundant degrees of freedom does exist when the inverse map is learned in a suitable piecewise-linear way. The second crucial component of our work is based on the insight that many operational-space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational-space controller. From the machine learning point of view, this learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward. We employ an expectation-maximization policy search algorithm to solve this problem. Evaluations on a three-degree-of-freedom robot arm illustrate the suggested approach. The application to a physically realistic simulator of the anthropomorphic SARCOS Master arm demonstrates feasibility for complex, high-degree-of-freedom robots. We also show that the proposed method works for learning resolved motion rate control on a real, physical Mitsubishi PA-10 medical robotics arm.
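
The abstract does not spell out the update rule, but expectation-maximization policy search with an immediate reward is commonly realized as a reward-weighted regression: sampled motor commands are re-fit by linear regression, with each sample weighted by a transformed immediate reward so that low-cost resolutions of redundancy dominate the fit. The sketch below is a minimal illustration of that generic idea, not the paper's code; all function names, feature choices, and shapes are assumptions made for the example.

```python
import numpy as np

def reward_weighted_update(Phi, U, cost, beta=1.0, reg=1e-6):
    """One hypothetical reward-weighted regression update for a locally
    linear policy u ~= Theta^T phi (a sketch, not the paper's algorithm).

    Phi  : (n_samples, n_features) state/task features, e.g. phi(q, qdot, xdot_des)
    U    : (n_samples, n_controls) executed motor commands
    cost : (n_samples,) immediate cost of each command (e.g. a u^T N u penalty)
    beta : temperature mapping cost to a positive reward weight
    """
    w = np.exp(-beta * cost)                        # immediate reward as a sample weight
    WPhi = Phi * w[:, None]                         # weight each sample's features
    A = Phi.T @ WPhi + reg * np.eye(Phi.shape[1])   # regularized normal equations
    B = WPhi.T @ U
    return np.linalg.solve(A, B)                    # Theta, (n_features, n_controls)

# Toy usage: fit a linear policy from noisy command samples; the exponential
# weighting biases the fit toward the low-cost commands in the data.
rng = np.random.default_rng(0)
Theta_true = rng.normal(size=(4, 2))
Phi = rng.normal(size=(200, 4))
U = Phi @ Theta_true + 0.1 * rng.normal(size=(200, 2))
cost = np.sum(U**2, axis=1)                         # stand-in for a command-norm cost
Theta = reward_weighted_update(Phi, U, cost)
```

Because the reward enters only as a per-sample weight, each update stays a closed-form weighted least-squares solve, which is what makes an EM-style policy search over an immediate reward practical for high-degree-of-freedom systems.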