Learning grasping affordances from local visual descriptors

  • Authors:
  • Luis Montesano; Manuel Lopes

  • Affiliations:
  • Instituto de Sistemas e Robotica, Instituto Superior Técnico, Lisboa, Portugal (both authors)

  • Venue:
  • DEVLRN '09 Proceedings of the 2009 IEEE 8th International Conference on Development and Learning
  • Year:
  • 2009


Abstract

In this paper we study the learning of affordances through self-experimentation. Specifically, we study the learning of local visual descriptors that anticipate the success of a given action executed upon an object. Consider, for instance, grasping: although graspability is a property of the whole object, a grasp will only succeed if applied to the right part of the object. We propose an algorithm to learn local visual descriptors of good grasping points from a set of trials performed by the robot. The method estimates the probability of a successful action (grasp) from simple local features. Experimental results on a humanoid robot illustrate how our method learns descriptors of good grasping points and generalizes to novel objects based on prior experience.
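The abstract's core idea, estimating the probability of a successful grasp from simple local features gathered over a set of robot trials, can be illustrated with a minimal sketch. The paper's own estimator and features are not reproduced here; the synthetic data, the logistic-regression model, and all names below (`fit_logistic`, `grasp_success_prob`) are placeholder assumptions chosen only to show the general pattern of learning P(success | local features) from trial outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic trials: each row is a local feature vector
# (e.g., simple filter responses around a candidate grasp point) and
# each label is the binary outcome of the grasp (1 = success).
n_trials, n_features = 200, 5
true_w = np.array([2.0, -1.5, 0.5, 0.0, 1.0])  # assumed ground truth
X = rng.normal(size=(n_trials, n_features))
y = (1.0 / (1.0 + np.exp(-(X @ true_w))) > rng.random(n_trials)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit weights by gradient ascent on the log-likelihood of the trials."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)               # current P(success | features)
        w += lr * X.T @ (y - p) / len(y)  # likelihood gradient step
    return w

w = fit_logistic(X, y)

def grasp_success_prob(features, w=w):
    """Predicted probability that a grasp at this local point succeeds."""
    return sigmoid(features @ w)
```

Once trained, `grasp_success_prob` can score candidate grasp points on a novel object, which is the sense in which such a local model generalizes across objects.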