3-D Scene Data Recovery Using Omnidirectional Multibaseline Stereo

  • Authors:
  • Sing Bing Kang; Richard Szeliski

  • Affiliations:
  • Sing Bing Kang: Digital Equipment Corporation, Cambridge Research Lab, One Kendall Square, Bldg. 700, Cambridge, MA 02139. E-mail: sbk@crl.dec.com
  • Richard Szeliski: Microsoft Corporation, One Microsoft Way, Redmond, WA 98052-6399. E-mail: szeliski@microsoft.com

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 1997

Abstract

A traditional approach to extracting geometric information from a large scene is to compute multiple 3-D depth maps from stereo pairs or direct range finders, and then to merge the 3-D data. However, the resulting merged depth maps may be subject to merging errors if the relative poses between depth maps are not known exactly. In addition, the 3-D data may also have to be resampled before merging, which adds complexity and potential sources of error. This paper provides a means of directly extracting 3-D data covering a very wide field of view, thus bypassing the need for numerous depth map mergings. In our work, cylindrical images are first composited from sequences of images taken while the camera is rotated 360° about a vertical axis. By taking such image panoramas at different camera locations, we can recover 3-D data of the scene using a set of simple techniques: feature tracking, an 8-point structure from motion algorithm, and multibaseline stereo. We also investigate the effect of median filtering on the recovered 3-D point distributions, and show the results of our approach applied to both synthetic and real scenes.
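The abstract mentions an 8-point structure from motion algorithm as one of the building blocks. As a point of reference, the standard normalized 8-point method estimates the fundamental matrix linearly from point correspondences; the sketch below (function name `eight_point`, NumPy usage, and the Hartley-style normalization are illustrative assumptions, not the authors' implementation) shows the core idea:

```python
import numpy as np

def eight_point(pts1, pts2):
    """Estimate a fundamental matrix F from >= 8 correspondences
    (hypothetical helper; a sketch of the classic normalized 8-point method)."""

    def normalize(pts):
        # Translate to centroid and scale so mean distance is sqrt(2)
        # (Hartley normalization, for numerical conditioning).
        mean = pts.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
        T = np.array([[scale, 0.0, -scale * mean[0]],
                      [0.0, scale, -scale * mean[1]],
                      [0.0, 0.0, 1.0]])
        ph = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
        return ph, T

    p1, T1 = normalize(np.asarray(pts1, dtype=float))
    p2, T2 = normalize(np.asarray(pts2, dtype=float))

    # Each correspondence contributes one row of A f = 0, where f stacks
    # the entries of F and the epipolar constraint is x2^T F x1 = 0.
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1, _), (u2, v2, _) in zip(p1, p2)])

    # f = right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)

    # Enforce the rank-2 constraint on F.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    F = U @ np.diag(S) @ Vt

    # Undo the normalizing transforms.
    return T2.T @ F @ T1
```

The normalization step is what makes the linear solve usable in practice; without it, the system A f = 0 is badly conditioned for typical pixel coordinates.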