From pixels to objects: enabling a spatial model for humanoid social robots

  • Authors:
  • Dario Figueira; Manuel Lopes; Rodrigo Ventura; Jonas Ruesch

  • Affiliations:
  • Dario Figueira, Manuel Lopes, Rodrigo Ventura: Institute for Systems and Robotics, Instituto Superior Técnico, TU Lisbon, Portugal
  • Jonas Ruesch: Artificial Intelligence Laboratory, Department of Informatics, University of Zurich, Switzerland

  • Venue:
  • ICRA'09: Proceedings of the 2009 IEEE International Conference on Robotics and Automation
  • Year:
  • 2009


Abstract

This work adds the concept of an object to an existing low-level attention system of the humanoid robot iCub. Objects are defined as clusters of SIFT visual features. When the robot first encounters an unknown object within a certain (small) distance from its eyes, it uses depth perception to store a cluster of the features found within an interval around that distance. Whenever a previously stored object crosses the robot's field of view again, it is recognized, mapped into an egocentric frame of reference, and gazed at. This mapping is persistent, in the sense that the object's identity and position are kept even when it is not visible to the robot. Features are stored and recognized in a bottom-up manner. Experimental results on the humanoid robot iCub validate the approach. This work lays the foundation for linking the bottom-up attention system with top-down, object-oriented information provided by humans.
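The object memory the abstract describes can be sketched as follows: each learned object is a cluster of feature descriptors, and recognition matches new descriptors against each stored cluster with a nearest-neighbor ratio test. This is a minimal NumPy-only illustration, not the paper's implementation: random vectors stand in for SIFT descriptors, and the class name, thresholds, and matching criterion are assumptions chosen for clarity.

```python
import numpy as np

class ObjectMemory:
    """Persistent store of objects, each represented as a cluster of
    visual feature descriptors (the paper uses SIFT; here any (N, D)
    array of descriptor vectors works)."""

    def __init__(self, ratio=0.75, min_matches=8):
        self.clusters = {}            # object name -> (N, D) descriptor array
        self.ratio = ratio            # Lowe-style ratio-test threshold (assumed value)
        self.min_matches = min_matches  # matches needed to declare recognition

    def store(self, name, descriptors):
        # Called when a new object is seen close to the eyes: keep the
        # cluster of descriptors found within the depth interval.
        self.clusters[name] = np.asarray(descriptors, dtype=float)

    def recognize(self, descriptors):
        # Match scene descriptors against every stored cluster and
        # return the best-matching object name, or None if no cluster
        # accumulates enough ratio-test matches.
        best, best_count = None, 0
        for name, cluster in self.clusters.items():
            if len(cluster) < 2:
                continue
            count = 0
            for d in np.asarray(descriptors, dtype=float):
                dists = np.linalg.norm(cluster - d, axis=1)
                first, second = np.partition(dists, 1)[:2]
                # Accept a match only if the nearest neighbor is clearly
                # closer than the second nearest (ratio test).
                if first < self.ratio * second:
                    count += 1
            if count >= self.min_matches and count > best_count:
                best, best_count = name, count
        return best
```

A typical use would be `mem.store("mug", descs)` when an unknown object appears near the eyes, then `mem.recognize(scene_descs)` on later frames; a non-None result would trigger the egocentric mapping and gaze shift described above.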