Learning grasp affordances with variable centroid offsets

  • Authors:
  • Thomas J. Palmer; Andrew H. Fagg

  • Affiliations:
  • University of Oklahoma Foundation Fellow, University of Oklahoma, Norman, OK; Computer Science and Bioengineering, University of Oklahoma, Norman, OK

  • Venue:
  • IROS '09: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • Year:
  • 2009


Abstract

When grasping an object, a robot must identify the available forms of interaction with that object. Each of these forms of interaction, a grasp affordance, describes one canonical option for placing the hand and fingers with respect to the object as an agent prepares to grasp it. The affordance does not represent a single hand posture, but an entire manifold within a space that describes hand position/orientation and finger configuration. Our challenges are (1) how to represent this manifold in as compact a manner as possible, and (2) how to extract these affordance representations given a set of example grasps demonstrated by a human teacher. In this paper, we approach the problem of representation by capturing all instances of a canonical grasp using a joint probability density function (PDF) in a hand posture space. The PDF captures, in an object-centered coordinate frame, a combination of hand orientation, grasp centroid position, and the offset from hand to centroid. The set of canonical grasps is then represented using a mixture distribution model. We address the problem of learning the model parameters from a set of example grasps using a clustering approach based on expectation maximization. Our experiments show that the learned canonical grasps correspond to the functionally different ways that the object may be grasped. In addition, by including the grasp centroid/hand relationship within the learned model, we eliminate this as a hard-coded parameter, and the resulting approach is capable of separating different grasp types, even when the different types involve similar hand postures.
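The mixture-model-plus-EM idea in the abstract can be illustrated with a minimal sketch. Assumptions not taken from the paper: each demonstrated grasp is flattened into a feature vector in R^d (e.g., hand orientation, grasp centroid position, and hand-to-centroid offset), the mixture components are Gaussian, and the function and variable names are purely illustrative, not the authors' implementation.

```python
import numpy as np

def fit_grasp_mixture(X, k, iters=50, seed=0):
    """Fit a k-component Gaussian mixture to grasp feature vectors X (n x d)
    with plain EM. Returns mixture weights, means, covariances, and the
    per-grasp responsibilities (soft cluster assignments)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, size=k, replace=False)]        # init from data points
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * k)  # init to data covariance
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: r[i, j] = p(component j | grasp i)
        r = np.zeros((n, k))
        for j in range(k):
            diff = X - means[j]
            inv = np.linalg.inv(covs[j])
            mahal = np.einsum('id,dc,ic->i', diff, inv, diff)
            logdet = np.linalg.slogdet(covs[j])[1]
            r[:, j] = weights[j] * np.exp(-0.5 * (mahal + logdet + d * np.log(2 * np.pi)))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, covariances from responsibilities
        nk = r.sum(axis=0)
        weights = nk / n
        means = (r.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - means[j]
            covs[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return weights, means, covs, r

# Toy usage: two well-separated synthetic "grasp types" in a 3-D feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (40, 3)),
               rng.normal(1.0, 0.1, (40, 3))])
w, mu, cov, resp = fit_grasp_mixture(X, k=2)
labels = resp.argmax(axis=1)   # hard assignment: which canonical grasp each example belongs to
```

Each learned component then plays the role of one canonical grasp; because the hand-to-centroid offset is part of the feature vector rather than a fixed parameter, components can differ in that offset even when the hand postures themselves are similar.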