Depth estimation during fixational head movements in a humanoid robot

  • Authors:
  • Marco Antonelli, Angel P. del Pobil, Michele Rucci

  • Affiliations:
  • Robotic Intelligence Lab, Universitat Jaume I, Castellón, Spain (Antonelli, del Pobil); Department of Psychology and Graduate Program in Neuroscience, Boston University, Boston, MA (Rucci)

  • Venue:
  • ICVS'13: Proceedings of the 9th International Conference on Computer Vision Systems
  • Year:
  • 2013


Abstract

Under natural viewing conditions, humans are not aware of continually performing small head and eye movements in the periods between voluntary relocations of gaze. It has recently been shown that these fixational head movements provide useful depth information in the form of parallax. Here, we replicate these coordinated head and eye movements in a humanoid robot and describe a method for extracting the resulting depth information. Proprioceptive signals are interpreted by means of a kinematic model of the robot to compute the velocity of the camera. The resulting signal is then optimally integrated with the optic flow to estimate depth in the scene. We present the results of simulations that validate the proposed approach.
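The abstract outlines the core computation: camera velocity, obtained from proprioception through the robot's kinematic model, is combined with measured optic flow to recover depth via motion parallax. As a rough illustration of that geometric relationship only, the Python sketch below estimates depth at a single image point using the standard rigid-motion flow model (rotational flow is depth-independent, so subtracting it leaves a translational component proportional to inverse depth). The function name, parameter values, and the simple least-squares step are illustrative assumptions; the paper's actual pipeline, including its optimal integration of proprioceptive and visual signals, is not reproduced here.

```python
import numpy as np

def depth_from_parallax(x, y, flow, t, omega):
    """Estimate depth at a normalized image point (x, y) given the measured
    optic flow and the known camera motion (translational velocity t and
    angular velocity omega), using the rigid-motion flow equations."""
    u, v = flow
    tx, ty, tz = t
    wx, wy, wz = omega

    # Flow induced by camera rotation: independent of scene depth
    u_rot = x * y * wx - (1.0 + x ** 2) * wy + y * wz
    v_rot = (1.0 + y ** 2) * wx - x * y * wy - x * wz

    # Translational flow direction, scaled by inverse depth rho = 1/Z
    a = np.array([x * tz - tx, y * tz - ty])
    # Residual flow after removing the rotational component
    b = np.array([u - u_rot, v - v_rot])

    # Least-squares estimate of inverse depth from the two flow components
    rho = a.dot(b) / a.dot(a)
    return 1.0 / rho


# Hypothetical example: camera translating at 1 cm/s along x, no rotation,
# flow measured at a point slightly off the image center.
Z = depth_from_parallax(x=0.1, y=0.05,
                        flow=(-0.02, 0.001),
                        t=(0.01, 0.0, 0.0),
                        omega=(0.0, 0.0, 0.0))
print(f"Estimated depth: {Z:.2f} m")
```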