3D Scene Reconstruction from Multiple Spherical Stereo Pairs

  • Authors:
  • Hansung Kim; Adrian Hilton

  • Affiliations:
  • Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, Surrey GU2 7XH, UK (both authors)

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2013


Abstract

We propose a 3D environment modelling method using multiple pairs of high-resolution spherical images. Spherical images of a scene are captured using a rotating line scan camera. Reconstruction is based on stereo image pairs with a vertical displacement between camera views. A 3D mesh model for each pair of spherical images is reconstructed by stereo matching. For accurate surface reconstruction, we propose a PDE-based disparity estimation method which produces continuous depth fields with sharp depth discontinuities, even in occluded and highly textured regions. A full environment model is constructed by fusion of partial reconstructions from spherical stereo pairs at multiple widely spaced locations. To avoid camera calibration steps for all camera locations, we calculate 3D rigid transforms between capture points using feature matching and register all meshes into a unified coordinate system. Finally, a complete 3D model of the environment is generated by selecting the most reliable observations among overlapping surface measurements, considering surface visibility, orientation and distance from the camera. We analyse the characteristics and behaviour of errors in spherical stereo imaging. Performance of the proposed algorithm is evaluated against ground truth from the Middlebury stereo test bed and LIDAR scans. Results are also compared with conventional structure-from-motion algorithms. The final composite model is rendered from a wide range of viewpoints with high-quality textures.
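Two of the steps summarised in the abstract lend themselves to short sketches: triangulating depth from the angular disparity of a vertically displaced spherical stereo pair, and computing the 3D rigid transform that registers partial meshes via matched features. The following Python/NumPy sketch illustrates the standard geometry (law-of-sines triangulation for a vertical baseline, and a Kabsch least-squares alignment); the function names and implementation details are illustrative assumptions, not the authors' code.

```python
import numpy as np

def depth_from_vertical_disparity(phi_bottom, phi_top, baseline):
    """Distance of a scene point from the bottom camera of a vertically
    displaced spherical pair, by the law of sines.

    phi_bottom, phi_top: elevation angles (radians) of the same point,
    measured from the upward baseline axis at each camera; phi_top >
    phi_bottom for a finite-distance point."""
    # Angle at the scene point closes the triangle: phi_top - phi_bottom.
    return baseline * np.sin(phi_top) / np.sin(phi_top - phi_bottom)

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping point set src onto
    dst (Kabsch / orthogonal Procrustes), the kind of alignment used to
    register partial reconstructions from matched features."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

For example, with a 1 m baseline, a point seen at elevation 90° from the bottom camera and 135° from the top camera triangulates to a distance of 1 m, as the geometry requires for a point level with the bottom camera at unit range.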