A systematic approach for 2D-image to 3D-range registration in urban environments

  • Authors:
  • Lingyun Liu; Ioannis Stamos

  • Affiliations:
  • Google, Mountain View, CA 94043, United States; Hunter College/CUNY, New York, NY 10065, United States

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2012

Abstract

The photorealistic modeling of large-scale objects, such as urban scenes, requires the combination of range sensing technology and digital photography. In this paper, we address the key problem of camera pose estimation in an automatic and efficient way. First, the camera orientation is recovered by matching vanishing points (extracted from 2D images) with 3D directions (derived from a 3D range model). Then, a hypothesis-and-test algorithm computes the camera positions with respect to the 3D range model by matching corresponding 2D and 3D linear features. The camera positions are further optimized by minimizing a line-to-line distance. The advantage of our method over earlier work is that it does not rely on extracted planar facades or other higher-order features; it utilizes only low-level linear features. This makes the method more general, robust, and efficient. We have also developed a user interface that allows users to accurately texture-map 2D images onto 3D range models at interactive rates. We have tested our system on a large variety of urban scenes.
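
The first step described in the abstract, recovering the camera orientation by aligning back-projected vanishing points with 3D directions from the range model, reduces to an orthogonal Procrustes (Kabsch) alignment once the correspondences are established. The sketch below is a minimal illustration under that assumption, not the authors' implementation; the intrinsic matrix K, the correspondence ordering, the sign conventions, and all function names are assumed for illustration.

    import numpy as np

    def directions_from_vanishing_points(vps, K):
        """Back-project 2D vanishing points (pixel coordinates) to unit 3D
        direction vectors in the camera frame: d ~ K^{-1} [u, v, 1]^T."""
        rays = (np.linalg.inv(K) @ np.column_stack([vps, np.ones(len(vps))]).T).T
        return rays / np.linalg.norm(rays, axis=1, keepdims=True)

    def rotation_from_direction_matches(model_dirs, cam_dirs):
        """Least-squares rotation R with R @ m_i ~= d_i for matched unit
        directions (Kabsch / orthogonal Procrustes). Assumes the vanishing-point
        to 3D-direction correspondences and their signs are already resolved."""
        H = model_dirs.T @ cam_dirs                    # 3x3 correlation matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    # Example with hypothetical values: three orthogonal facade/edge directions.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    vps = np.array([[1500.0, 250.0], [-900.0, 260.0], [310.0, -4000.0]])
    model_dirs = np.eye(3)                             # e.g. dominant 3D model axes
    R = rotation_from_direction_matches(model_dirs,
                                        directions_from_vanishing_points(vps, K))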
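
For the second step, once the rotation is fixed, each match between an image line and a 3D model line constrains the camera center: the image line back-projects to a plane through the camera center, and the matched 3D line must lie on that plane. The following sketch solves the resulting linear system for the camera center; it illustrates this generic geometric constraint rather than the paper's hypothesis-and-test procedure, and the function name, thresholds, and use of homogeneous line coefficients are assumptions.

    import numpy as np

    def camera_center_from_line_matches(img_lines, line_points, line_dirs, K, R):
        """Estimate the camera center C (world frame) from matched 2D/3D lines,
        given a known world-to-camera rotation R. An image line l back-projects
        to a plane through the camera center with normal n = K^T l (camera frame);
        a matched 3D line (point P, direction D) on that plane yields one linear
        equation (n^T R) C = n^T R P. Needs >= 3 non-degenerate matches."""
        A, b = [], []
        for l, P, D in zip(img_lines, line_points, line_dirs):
            n = K.T @ l
            n = n / np.linalg.norm(n)
            # A consistent match must have the 3D direction lying in the plane.
            if abs(n @ (R @ D)) > 1e-3:
                continue
            A.append(n @ R)
            b.append(n @ (R @ P))
        C, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
        return C

In a hypothesis-and-test setting, minimal sets of such 2D/3D line matches would be sampled, each candidate position solved for as above, and the candidates scored, for example by the line-to-line distance mentioned in the abstract, before a final refinement.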