Generic and real-time structure from motion using local bundle adjustment

  • Authors:
  • E. Mouragnon; M. Lhuillier; M. Dhome; F. Dekeyser; P. Sayd

  • Affiliations:
  • LASMEA UMR 6602, Université Blaise Pascal/CNRS, 24 Avenue des Landais, 63177 Aubière Cedex, France and Image and Embedded Computer Lab., CEA/LIST/DTSI/SARC, 91191 Gif-sur-Yvette Cedex, F ...
  • LASMEA UMR 6602, Université Blaise Pascal/CNRS, 24 Avenue des Landais, 63177 Aubière Cedex, France
  • LASMEA UMR 6602, Université Blaise Pascal/CNRS, 24 Avenue des Landais, 63177 Aubière Cedex, France
  • Image and Embedded Computer Lab., CEA/LIST/DTSI/SARC, 91191 Gif-sur-Yvette Cedex, France
  • Image and Embedded Computer Lab., CEA/LIST/DTSI/SARC, 91191 Gif-sur-Yvette Cedex, France

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2009

Abstract

This paper describes a method for estimating the motion of a calibrated camera and the three-dimensional geometry of the filmed environment. The only data used is video input. Interest points are tracked and matched between frames at video rate. Robust estimates of the camera motion are computed in real time, and key frames are selected to enable 3D reconstruction of the features. We introduce a local bundle adjustment that allows 3D points and camera poses to be refined simultaneously through the sequence. This significantly reduces computational complexity compared with global bundle adjustment. The method is applied initially to a perspective camera model, then extended to a generic camera model that describes most existing kinds of cameras. Experiments performed on real-world data evaluate the speed and robustness of the method. Results are compared to ground truth measured with a differential GPS. The generalized method is also evaluated experimentally, using three types of calibrated cameras: stereo rig, perspective and catadioptric.
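The core idea of local bundle adjustment can be illustrated with a toy sketch (this is not the paper's implementation, which optimizes full 6-DoF poses and reprojection error): only the most recent `window` camera poses and the points they observe are refined, while older poses stay fixed and anchor the reconstruction. The simplified model below uses 2D translation-only "cameras" with observation `z_ij = p_j - c_i`, minimized by plain gradient descent; all names and values are illustrative assumptions.

```python
def local_ba(cameras, points, obs, window=2, iters=300, lr=0.1):
    """Local bundle adjustment sketch: refine only the last `window` cameras
    and the points they see, minimizing sum ||(p_j - c_i) - z_ij||^2.

    cameras: list of (x, y) camera positions (older ones are held fixed)
    points:  list of (x, y) 3D-point stand-ins
    obs:     dict mapping (cam_index, point_index) -> observed (zx, zy)
    """
    free_cams = set(range(max(0, len(cameras) - window), len(cameras)))
    free_pts = {j for (i, j) in obs if i in free_cams}
    for _ in range(iters):
        gc = {i: [0.0, 0.0] for i in free_cams}  # gradients w.r.t. cameras
        gp = {j: [0.0, 0.0] for j in free_pts}   # gradients w.r.t. points
        for (i, j), (zx, zy) in obs.items():
            # residual r = (p_j - c_i) - z_ij; dE/dc = -r, dE/dp = +r
            rx = points[j][0] - cameras[i][0] - zx
            ry = points[j][1] - cameras[i][1] - zy
            if i in gc:
                gc[i][0] -= rx; gc[i][1] -= ry
            if j in gp:
                gp[j][0] += rx; gp[j][1] += ry
        for i in free_cams:  # only the sliding window of recent poses moves
            cameras[i] = (cameras[i][0] - lr * gc[i][0],
                          cameras[i][1] - lr * gc[i][1])
        for j in free_pts:
            points[j] = (points[j][0] - lr * gp[j][0],
                         points[j][1] - lr * gp[j][1])
    return cameras, points
```

Because only a constant-size window of poses (plus the points they observe) enters each optimization, the per-key-frame cost stays bounded as the sequence grows, which is what makes the approach real-time compared with global bundle adjustment.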