The Role of Self-Calibration in Euclidean Reconstruction from Two Rotating and Zooming Cameras

  • Authors:
  • Eric Hayman; Lourdes de Agapito; Ian D. Reid; David W. Murray


  • Venue:
  • ECCV '00 Proceedings of the 6th European Conference on Computer Vision, Part II
  • Year:
  • 2000


Abstract

Reconstructing the scene from image sequences captured by moving cameras with varying intrinsic parameters is one of the major achievements of computer vision research in recent years. However, there remain gaps in the knowledge of what is reliably recoverable when the camera is constrained to move in particular ways. This paper considers the special case of multiple cameras whose optic centres are fixed in space, but which are allowed to rotate and zoom freely, an arrangement seen widely in practical applications. The analysis is restricted to two such cameras, although the methods are readily extended to more than two. As a starting point, an initial self-calibration of each camera is obtained independently. The first contribution of this paper is an analysis of near-ambiguities which commonly arise in the self-calibration of rotating cameras. Secondly, we demonstrate how their effects may be mitigated by exploiting the epipolar geometry. Results on simulated and real data are presented to demonstrate how a number of self-calibration methods perform, including a final bundle adjustment of all motion and structure parameters.
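To make the starting point of the paper concrete, the following is a minimal numerical sketch (not the authors' implementation) of the standard self-calibration of a single rotating camera: inter-image homographies of a purely rotating camera satisfy H = K R K⁻¹, so the image of the absolute conic ω = K⁻ᵀK⁻¹ obeys the linear invariance Hᵀ ω H = ω, from which K can be recovered by Cholesky factorization. The intrinsics, rotation angles, and helper names below are illustrative; the synthetic homographies stand in for ones estimated from image matches.

```python
import numpy as np

def rotation(axis, angle):
    """Rodrigues formula: rotation matrix about a unit axis."""
    a = np.asarray(axis, float)
    a /= np.linalg.norm(a)
    S = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * S + (1 - np.cos(angle)) * S @ S

def homography_constraints(H):
    """Rows of the linear system H^T w H - w = 0 in the 6 entries of symmetric w."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    rows = []
    for a in range(3):
        for b in range(3):
            row = np.zeros(6)
            for k, (i, j) in enumerate(idx):
                c = H[i, a] * H[j, b]
                if i != j:                      # symmetric entry appears twice
                    c += H[j, a] * H[i, b]
                if (i, j) == (min(a, b), max(a, b)):
                    c -= 1.0                    # subtract w[a, b]
                row[k] = c
            rows.append(row)
    return np.array(rows)

# Illustrative ground-truth intrinsics (focal lengths and principal point).
K_true = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

# Synthetic inter-image homographies H = K R K^-1 for three rotations.
# (With noisy estimates one would first normalise each H to unit determinant.)
Hs = []
for axis, ang in [((0, 1, 0), 0.2), ((1, 0, 0), 0.15), ((1, 1, 0), 0.1)]:
    Hs.append(K_true @ rotation(axis, ang) @ np.linalg.inv(K_true))

# Stack the constraints and take the null vector via SVD.
A = np.vstack([homography_constraints(H) for H in Hs])
w = np.linalg.svd(A)[2][-1]
omega = np.array([[w[0], w[1], w[2]], [w[1], w[3], w[4]], [w[2], w[4], w[5]]])
if omega[2, 2] < 0:                 # fix the overall sign so omega is positive definite
    omega = -omega

# omega = K^-T K^-1, so Cholesky omega = L L^T gives K^-1 = L^T.
L = np.linalg.cholesky(omega)
K_est = np.linalg.inv(L.T)
K_est /= K_est[2, 2]
print(np.round(K_est, 2))
```

Each homography contributes linear constraints on ω, and two rotations about distinct axes already determine it up to scale; the paper's analysis concerns how weakly some of these constraints pin down the solution in practice (the near-ambiguities), which is why the noise-free recovery above is exact while real sequences benefit from the epipolar constraint between the two cameras.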