Closing the loop in appearance-guided omnidirectional visual odometry by using vocabulary trees

  • Authors:
  • Davide Scaramuzza; Friedrich Fraundorfer; Marc Pollefeys

  • Affiliations:
  • Autonomous Systems Lab, ETH Zurich, Switzerland; Computer Vision and Geometry Group, ETH Zurich, Switzerland; Computer Vision and Geometry Group, ETH Zurich, Switzerland

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2010

Abstract

In this paper, we present a method that accurately recovers the trajectory of a vehicle purely from monocular omnidirectional images. The method combines appearance-guided structure from motion with loop closing. The appearance-guided monocular structure-from-motion scheme is used for initial motion estimation: appearance information is used to correct the rotation estimates computed from feature points alone. A place recognition scheme based on visual words is employed for loop detection. Loop closing is performed by bundle adjustment minimizing the reprojection error of feature matches. The proposed method is successfully demonstrated on videos from an automotive platform. The experiments show that the use of appearance information leads to superior motion estimates compared to a purely feature-based approach, and we demonstrate a working loop-closing method that eliminates the residual drift of the motion estimation.
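To make the visual-word-based loop detection step concrete, the following is a minimal sketch, not the paper's implementation. It assumes each image has already been quantized into a visual-word histogram by a vocabulary tree (as produced by some external feature extraction and quantization stage); the function names, the similarity threshold, and the `min_gap` parameter that suppresses matches to temporally adjacent frames are illustrative assumptions.

```python
import numpy as np


def bow_similarity(hist_a, hist_b):
    """Cosine similarity between two visual-word histograms (bag-of-words vectors)."""
    a = hist_a / (np.linalg.norm(hist_a) + 1e-12)
    b = hist_b / (np.linalg.norm(hist_b) + 1e-12)
    return float(np.dot(a, b))


def detect_loop_candidates(current_hist, past_hists, min_gap=50, threshold=0.3):
    """Return indices of earlier frames whose visual-word histograms are similar
    enough to the current frame to be considered loop-closure candidates.

    Frames within `min_gap` of the current frame are skipped so that the
    detector does not fire on trivially similar consecutive images.
    All parameter values here are placeholders, not values from the paper.
    """
    candidates = []
    last_allowed = max(0, len(past_hists) - min_gap)
    for idx in range(last_allowed):
        if bow_similarity(current_hist, past_hists[idx]) > threshold:
            candidates.append(idx)
    return candidates
```

In a full pipeline, each detected candidate would be verified geometrically (e.g. by feature matching between the two images), and the confirmed loop would then be closed by bundle adjustment over the involved camera poses and feature matches, minimizing the reprojection error as described in the abstract.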