How to localize humanoids with a single camera?

  • Authors:
  • Pablo F. Alcantarilla, Olivier Stasse, Sebastien Druon, Luis M. Bergasa, Frank Dellaert

  • Affiliations:
  • ISIT-UMR 6284 CNRS, Université d'Auvergne, Clermont-Ferrand, France; LAAS-CNRS, Toulouse, France; LIRMM, University Montpellier II, Montpellier, France; Department of Electronics, University of Alcalá, Madrid, Spain; School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA

  • Venue:
  • Autonomous Robots
  • Year:
  • 2013

Abstract

In this paper, we propose a real-time vision-based localization approach for humanoid robots using a single camera as the only sensor. To localize the robot accurately, we first build an accurate 3D map of the environment. In the map computation process, we use stereo visual SLAM techniques based on non-linear least-squares optimization (bundle adjustment). Once we have computed a 3D reconstruction of the environment, which comprises a set of camera poses (keyframes) and a list of 3D points, we learn the visibility of the 3D points by exploiting the geometric relationships between the camera poses and the 3D map points involved in the reconstruction. Finally, we use the prior 3D map and the learned visibility prediction for monocular vision-based localization. Our algorithm is very efficient, easy to implement, and more robust and accurate than existing approaches. By means of visibility prediction, we consider for a query pose only the highly visible 3D points, tremendously speeding up the data association between 3D map points and perceived 2D features in the image. In this way, we can solve the Perspective-n-Point (PnP) problem very efficiently, providing robust and fast vision-based localization. We demonstrate the robustness and accuracy of our approach through several vision-based localization experiments with the HRP-2 humanoid robot.
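
The localization stage described in the abstract lends itself to a compact sketch: given a prior map and a coarse pose guess, predict the likely-visible 3D points, match them against the perceived 2D features, and solve PnP with RANSAC. The Python/OpenCV code below is a minimal illustration under stated assumptions, not the authors' implementation: the keyframe data layout, the nearest-keyframe heuristic standing in for the learned visibility predictor, and the names predict_visible_points and localize are all hypothetical.

```python
# Minimal sketch of the localization stage: predict the likely-visible map
# points from the k keyframes nearest to a pose guess (a simple geometric
# stand-in for the paper's learned visibility prediction), match them
# against perceived 2D features, and solve PnP + RANSAC.
# The keyframe layout below is a hypothetical assumption, not the paper's
# data structure. Requires numpy and OpenCV (pip install opencv-python).
import numpy as np
import cv2


def predict_visible_points(query_center, keyframes, k=5):
    """Union of the 3D points observed by the k keyframes whose camera
    centers lie closest to the query camera center."""
    centers = np.array([kf["center"] for kf in keyframes])  # (N, 3)
    nearest = np.argsort(np.linalg.norm(centers - query_center, axis=1))[:k]
    visible = {}
    for i in nearest:
        # each keyframe stores {point_id: (xyz, descriptor)} observations
        visible.update(keyframes[i]["points"])
    return visible


def localize(keypoints_2d, descriptors_2d, visible, K):
    """Match image features against the predicted-visible 3D points and
    estimate the camera pose by solving PnP with RANSAC."""
    pts3d = np.array([xyz for xyz, _ in visible.values()], dtype=np.float64)
    descs = np.array([d for _, d in visible.values()], dtype=np.float32)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(descriptors_2d, descs)
    obj = np.float64([pts3d[m.trainIdx] for m in matches])       # 3D points
    img = np.float64([keypoints_2d[m.queryIdx] for m in matches])  # 2D pixels
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, distCoeffs=None, reprojectionError=3.0)
    return (rvec, tvec) if ok else None
```

In the paper the visibility is learned from the reconstruction itself rather than read off the nearest keyframes, but the data-association speedup works the same way either way: the PnP solver only ever sees the small predicted-visible subset of the map instead of all reconstructed 3D points.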