Learning, planning, and control for quadruped locomotion over challenging terrain

  • Authors:
  • Mrinal Kalakrishnan, Jonas Buchli, Peter Pastor, Michael Mistry, Stefan Schaal

  • Affiliations:
  • Mrinal Kalakrishnan, Jonas Buchli, Peter Pastor, Stefan Schaal: Computational Learning and Motor Control Lab, University of Southern California, Los Angeles, CA 90089, USA
  • Michael Mistry: Disney Research, Pittsburgh, PA 15213, USA

  • Venue:
  • International Journal of Robotics Research
  • Year:
  • 2011

Abstract

We present a control architecture for fast quadruped locomotion over rough terrain. We approach the problem by decomposing it into sub-systems, in each of which we apply state-of-the-art learning, planning, optimization, and control techniques to achieve robust, fast locomotion. Unique features of our control strategy include: (1) a system that learns optimal foothold choices from expert demonstration using terrain templates, (2) a body trajectory optimizer based on the Zero-Moment Point (ZMP) stability criterion, and (3) a floating-base inverse dynamics controller that, in conjunction with force control, allows for robust, compliant locomotion over unperceived obstacles. We evaluate the controller on the LittleDog quadruped robot over a wide variety of rough terrains of varying difficulty, including rocks, logs, steps, barriers, and gaps, with obstacle sizes up to the leg length of the robot. We demonstrate the generalization ability of this controller by presenting results from testing performed by an independent external test team on terrain that had never been shown to us.
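
To give a sense of the ZMP stability criterion used by the body trajectory optimizer in item (2), the sketch below computes the ZMP of the center of mass under the commonly used cart-table (point-mass, flat-ground) simplification. This is an illustrative assumption, not the paper's exact formulation; names such as zmp_xy, com_pos, and com_acc are hypothetical.

# Minimal ZMP sketch under the cart-table simplification (assumption, not the paper's code).

GRAVITY = 9.81  # m/s^2

def zmp_xy(com_pos, com_acc):
    """Return the (x, y) ZMP from center-of-mass position and acceleration, each (x, y, z)."""
    x, y, z = com_pos
    ax, ay, az = com_acc
    denom = az + GRAVITY          # vertical CoM acceleration plus gravity
    return (x - z * ax / denom,   # ZMP shifts opposite to horizontal acceleration
            y - z * ay / denom)

# Example: a body trajectory optimizer would shape the CoM motion so that this point
# stays inside the support polygon of the stance feet at every time step.
print(zmp_xy((0.0, 0.0, 0.3), (0.5, 0.0, 0.0)))  # forward acceleration -> ZMP behind the CoM in x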