Learning Objects and Grasp Affordances through Autonomous Exploration

  • Authors:
  • Dirk Kraft; Renaud Detry; Nicolas Pugeault; Emre Başeski; Justus Piater; Norbert Krüger

  • Affiliations:
  • University of Southern Denmark, Denmark; University of Liège, Belgium; University of Southern Denmark, Denmark; University of Southern Denmark, Denmark; University of Liège, Belgium; University of Southern Denmark, Denmark

  • Venue:
  • ICVS '09: Proceedings of the 7th International Conference on Computer Vision Systems
  • Year:
  • 2009

Abstract

We describe a system for the autonomous learning of visual object representations and their grasp affordances on a robot-vision platform. The system segments objects by grasping and moving 3D scene features, and builds probabilistic visual representations for object detection, recognition, and pose estimation; these representations are then augmented with continuous characterizations of grasp affordances generated through biased random exploration. By carefully balancing generic prior knowledge, encoded in (1) the embodiment of the system, (2) a vision module extracting structurally rich information from stereo image sequences, and (3) a number of built-in behavioral modules, against autonomous exploration, the system generates object and grasping knowledge through interaction with its environment.
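The idea of learning grasp affordances through biased random exploration can be illustrated with a minimal sketch. This is not the authors' implementation (which builds on Early Cognitive Vision descriptors and nonparametric pose densities); all function names here (`propose_grasp`, `explore`, `affordance_density`) and the simulated success test are hypothetical stand-ins: grasp candidates are sampled with a bias toward 3D edge features, executed, and the successful grasp poses are kept as samples of a continuous (kernel-density) affordance model.

```python
import math
import random

def propose_grasp(edge_features, noise=0.05, rng=random):
    # Biased exploration: pick a random 3D edge feature and
    # perturb it, so candidates concentrate near visible structure.
    x, y, z = rng.choice(edge_features)
    return (x + rng.gauss(0, noise),
            y + rng.gauss(0, noise),
            z + rng.gauss(0, noise))

def explore(edge_features, try_grasp, n_trials=100, rng=random):
    # Execute grasps and collect the successful poses as
    # empirical samples of the object's grasp affordance.
    successes = []
    for _ in range(n_trials):
        g = propose_grasp(edge_features, rng=rng)
        if try_grasp(g):
            successes.append(g)
    return successes

def affordance_density(successes, query, bandwidth=0.1):
    # Continuous affordance characterization: a Gaussian kernel
    # density estimate of grasp success at the query pose.
    if not successes:
        return 0.0
    total = 0.0
    for gx, gy, gz in successes:
        d2 = ((query[0] - gx) ** 2 +
              (query[1] - gy) ** 2 +
              (query[2] - gz) ** 2)
        total += math.exp(-d2 / (2.0 * bandwidth ** 2))
    return total / len(successes)
```

In this toy setting, `try_grasp` would be replaced by actual robot execution with haptic success feedback; the resulting density can then be queried to rank grasp hypotheses on newly detected object instances.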