Learning grasp strategies with partial shape information

  • Authors:
  • Ashutosh Saxena, Lawson L. S. Wong, Andrew Y. Ng

  • Affiliations:
  • Computer Science Department, Stanford University, Stanford, CA (all authors)

  • Venue:
  • AAAI'08: Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 3
  • Year:
  • 2008

Abstract

We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc.); in practice, however, a robot facing a novel object will usually be able to perceive only the front (visible) faces of the object. In this paper, we propose an approach to grasping that estimates the stability of different grasps given only noisy estimates of the shape of the visible portions of an object, such as those obtained from a depth sensor. By combining this with a kinematic description of a robot arm and hand, our algorithm computes a specific positioning of the robot's fingers so as to grasp an object. We test our algorithm on two robots (with very different arms/manipulators, including one with a multifingered hand). We report results on the task of grasping objects whose shapes and appearances differ significantly from those in the training set, both in highly cluttered and in uncluttered environments. We also apply our algorithm to the problem of unloading items from a dishwasher.
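
The abstract describes a pipeline of this general shape: compute features from the visible (partial) point cloud around each candidate grasp point, score each candidate with a learned stability model, and keep the best-scoring candidate the arm can actually reach. The sketch below illustrates that flow only; it is not the authors' implementation, and every function, feature, and weight in it is a hypothetical placeholder.

```python
# Minimal sketch of partial-shape grasp selection: score candidate grasp
# points from visible-geometry features, then pick the best reachable one.
# All names, features, and weights are illustrative placeholders.
import numpy as np

def local_shape_features(cloud, point, radius=0.03):
    """Features of the visible geometry near a candidate grasp point."""
    d = np.linalg.norm(cloud - point, axis=1)
    nbrs = cloud[d < radius]
    if len(nbrs) < 3:
        return np.zeros(4)
    centered = nbrs - nbrs.mean(axis=0)
    # Eigenvalues of the local covariance summarize flatness/curvature.
    evals = np.sort(np.linalg.eigvalsh(centered.T @ centered / len(nbrs)))[::-1]
    return np.array([len(nbrs), evals[0], evals[1], evals[2]])

def grasp_score(features, weights, bias):
    """Logistic score standing in for a learned grasp-stability model."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

def is_reachable(point):
    """Placeholder kinematic check; a real system would query the arm's IK solver."""
    return np.linalg.norm(point) < 0.8  # within a 0.8 m workspace radius

def select_grasp(cloud, candidates, weights, bias):
    """Return the highest-scoring reachable candidate grasp point, or None."""
    scored = [(grasp_score(local_shape_features(cloud, c), weights, bias), c)
              for c in candidates if is_reachable(c)]
    if not scored:
        return None
    return max(scored, key=lambda sc: sc[0])[1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake partial scan: a small blob of points in front of the robot.
    cloud = rng.normal(size=(500, 3)) * 0.05 + np.array([0.5, 0.0, 0.1])
    candidates = cloud[rng.choice(len(cloud), size=20, replace=False)]
    weights, bias = np.array([0.01, -5.0, 5.0, 5.0]), -1.0  # untrained, illustrative only
    print("chosen grasp point:", select_grasp(cloud, candidates, weights, bias))
```

In the paper's setting the scoring model is learned from labeled grasp data and the reachability check comes from the robot's kinematic description; the sketch replaces both with stand-ins to keep the structure visible.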