Integration of static and self-motion-based depth cues for efficient reaching and locomotor actions

  • Authors:
  • Beata J. Grzyb, Vicente Castelló, Marco Antonelli, Angel P. del Pobil

  • Affiliations:
  • Robotic Intelligence Lab, Jaume I University, Castellón, Spain (all authors)

  • Venue:
  • ICANN'12: Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning - Volume Part I
  • Year:
  • 2012


Abstract

The common approach to estimating the distance of an object in computer vision and robotics is stereo vision. Stereopsis, however, provides good estimates only in near space and is thus better suited to reaching actions. To successfully plan and execute an action in far space, other depth cues must be taken into account. Self-generated body movements, such as head and eye movements or locomotion, can provide rich depth information. This paper proposes a model for the integration of static and self-motion-based depth cues on a humanoid robot. Our results show that self-motion-based visual cues improve the accuracy of distance perception and, combined with other depth cues, provide the robot with a robust distance estimator suitable for both reaching and walking actions.
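
The two ingredients the abstract contrasts can be sketched generically. The snippet below is an illustration only, not the authors' model: it shows the standard depth-from-disparity formula (whose precision degrades in far space as disparity shrinks) and a simple inverse-variance weighted fusion of independent depth estimates. All function names and numbers are hypothetical.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from binocular disparity: Z = f * B / d.
    Precision degrades at distance, where disparity approaches zero."""
    return focal_px * baseline_m / disparity_px


def fuse_cues(estimates, variances):
    """Inverse-variance weighted combination of independent depth estimates:
    less reliable cues (larger variance) receive proportionally less weight."""
    weights = [1.0 / v for v in variances]
    return sum(w * z for w, z in zip(weights, estimates)) / sum(weights)


# Hypothetical example: a precise stereo estimate in near space fused with a
# noisier self-motion-based estimate of the same target.
z_stereo = stereo_depth(focal_px=500.0, baseline_m=0.1, disparity_px=25.0)  # 2.0 m
z_fused = fuse_cues([z_stereo, 2.2], [0.01, 0.04])
```

In this sketch the fused estimate stays close to the low-variance stereo cue in near space; in far space, where stereo variance grows, the weighting would shift toward the self-motion-based cue.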