Vision guided manipulation for planetary robotics - position control

  • Authors:
  • Kevin Nickels, Matthew DiCicco, Max Bajracharya, Paul Backes

  • Affiliations:
  • Department of Engineering Science, Trinity University, One Trinity Place, San Antonio, TX 78212-7200, USA; Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., Pasadena, CA 91109, USA

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2010

Abstract

Manipulation systems for planetary exploration operate under severe restrictions. They must integrate vision and manipulation to achieve the reliability, safety, and predictability required of expensive systems operating on remote planets. They must also run on very modest computing hardware shared with many other subsystems, and must operate without human intervention. Typically, such systems employ calibrated stereo cameras and calibrated manipulators to achieve precision on the order of one centimeter for instrument placement activities. This paper presents three complementary approaches to vision guided manipulation designed to robustly achieve high precision in manipulation. These approaches are described and compared, both in simulation and on hardware. In situ estimation and adaptation of the manipulator and/or camera models in these methods account for changes in the system configuration, thus ensuring consistent precision for the life of the mission. All three methods provide several-fold increases in manipulator positioning accuracy over the standard flight approach.
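
To make the correction step concrete, the sketch below shows one common way a stereo-guided position correction of this kind can be realized: triangulate the end-effector fiducial with a calibrated stereo pair, then shift the commanded goal by the residual between the arm model's prediction and the stereo observation. This is an illustrative assumption, not the authors' algorithms; the function names, the DLT triangulation, and the constant-offset error model are all placeholders.

```python
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    # Linear (DLT) triangulation of a single feature seen by a
    # calibrated stereo pair. P_left/P_right are 3x4 projection
    # matrices; uv_left/uv_right are (u, v) pixel coordinates.
    A = np.vstack([
        uv_left[0]  * P_left[2]  - P_left[0],
        uv_left[1]  * P_left[2]  - P_left[1],
        uv_right[0] * P_right[2] - P_right[0],
        uv_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]             # homogeneous least-squares solution
    return X[:3] / X[3]    # 3D point in the common (e.g., rover) frame

def corrected_goal(goal_xyz, stereo_xyz, model_xyz):
    # The arm controller believes the tool is at model_xyz (forward
    # kinematics of the nominal model); the cameras see it at
    # stereo_xyz. Locally, commanding goal + (model - stereo) drives
    # the *observed* tool tip onto the goal, compensating a constant
    # kinematic/camera bias near this workspace point.
    return goal_xyz + (model_xyz - stereo_xyz)
```

In this simplified model, a single local residual stands in for full model adaptation; the methods compared in the paper instead re-estimate manipulator and/or camera model parameters in situ, which is what sustains precision over the life of the mission.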