Towards a Spatial Model for Humanoid Social Robots

  • Authors:
  • Dario Figueira, Manuel Lopes, Rodrigo Ventura, Jonas Ruesch

  • Affiliations:
  • Institute for Systems and Robotics, Instituto Superior Técnico, Lisbon, Portugal (D. Figueira, M. Lopes, R. Ventura); Artificial Intelligence Laboratory, Department of Informatics, University of Zurich, Switzerland (J. Ruesch)

  • Venue:
  • EPIA '09 Proceedings of the 14th Portuguese Conference on Artificial Intelligence: Progress in Artificial Intelligence
  • Year:
  • 2009

Abstract

This paper presents an approach to endow a humanoid robot with the capability of learning new objects and recognizing them in an unstructured environment. A new object is learnt whenever an unrecognized one is found within a certain (small) distance from the robot head. Recognized objects are mapped to an ego-centric frame of reference, which, together with a simple short-term memory mechanism, makes this mapping persistent. This allows the robot to remain aware of known objects even when they are temporarily out of the field of view, thus providing a primary spatial model of the environment (as far as known objects are concerned). SIFT features are used not only to recognize previously learnt objects, but also to let the robot estimate their distance (depth perception). The humanoid platform used for the experiments was the iCub humanoid robot. This capability operates together with the iCub's low-level attention system: recognized objects elicit salience, attracting the robot's attention so that it gazes at each of them in turn. We claim that the presented approach is a contribution towards linking a bottom-up attention system with top-down cognitive information.
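The recognition/learning loop described above can be illustrated with a minimal sketch. This is not the authors' implementation; it only assumes OpenCV's SIFT and brute-force matcher, while `ObjectModel`, the match and distance thresholds, the memory timeout, and the `estimate_depth` / `to_egocentric` callbacks are hypothetical names introduced here for illustration.

```python
# Illustrative sketch of: SIFT recognition, learning of close unknown objects,
# and a short-term ego-centric memory of recognized objects.
import time
import cv2

MATCH_THRESHOLD = 12     # min. good matches to accept a recognition (assumed)
LEARN_DISTANCE_M = 0.4   # "small" distance to the head that triggers learning (assumed)
MEMORY_TIMEOUT_S = 10.0  # short-term memory persistence window (assumed)

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)


class ObjectModel:
    """A learnt object: a name plus its stored SIFT descriptors."""
    def __init__(self, name, descriptors):
        self.name = name
        self.descriptors = descriptors


known_objects = []       # learnt SIFT models
short_term_memory = {}   # name -> (ego-centric position, last-seen timestamp)


def good_matches(desc_query, desc_model):
    """Lowe's ratio test over 2-NN matches."""
    good = []
    for pair in matcher.knnMatch(desc_query, desc_model, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    return good


def process_frame(image, estimate_depth, to_egocentric):
    """One perception step: recognize, learn if unknown and close, update memory.

    estimate_depth and to_egocentric are hypothetical callbacks: the first
    returns a depth estimate (e.g. from stereo disparity of matched SIFT
    features), the second maps image keypoints plus depth into the robot's
    ego-centric frame.
    """
    keypoints, descriptors = sift.detectAndCompute(image, None)
    if descriptors is None:
        return

    # Compare the observed features against every learnt model.
    best, best_count = None, 0
    for model in known_objects:
        count = len(good_matches(descriptors, model.descriptors))
        if count > best_count:
            best, best_count = model, count

    depth = estimate_depth(keypoints)
    if best is not None and best_count >= MATCH_THRESHOLD:
        # Recognized: refresh its entry in the ego-centric short-term memory.
        short_term_memory[best.name] = (to_egocentric(keypoints, depth), time.time())
    elif depth is not None and depth < LEARN_DISTANCE_M:
        # Unrecognized object presented close to the head: learn it as new.
        name = "object_%d" % len(known_objects)
        known_objects.append(ObjectModel(name, descriptors))
        short_term_memory[name] = (to_egocentric(keypoints, depth), time.time())

    # Drop entries not re-observed within the short-term memory window.
    now = time.time()
    stale = [n for n, (_, t) in short_term_memory.items() if now - t > MEMORY_TIMEOUT_S]
    for name in stale:
        del short_term_memory[name]
```

In this sketch, the entries surviving in `short_term_memory` stand in for the persistent ego-centric mapping mentioned in the abstract; an attention layer could read them to assign salience to recognized objects and gaze at each in turn.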