Point-sampled 3D video of real-world scenes

  • Authors:
  • Michael Waschbüsch, Stephan Würmlin, Daniel Cotting, Markus Gross

  • Affiliations:
  • Computer Graphics Laboratory, ETH Zurich, Switzerland (all authors)

  • Venue:
  • Image Communication
  • Year:
  • 2007

Abstract

This paper presents a point-sampled approach for capturing 3D video footage and subsequently re-rendering real-world scenes. The acquisition system is composed of multiple sparsely placed 3D video bricks; each brick contains a low-cost projector, two grayscale cameras, and a high-resolution color camera. To improve depth calculation, we rely on structured light patterns. Texture images and pattern-augmented views of the scene are acquired simultaneously by time-multiplexed projections of complementary patterns and synchronized camera exposures. High-resolution depth maps are extracted by depth-from-stereo algorithms applied to the acquired pattern images. The surface samples corresponding to the depth values are merged into a view-independent, point-based 3D data structure. This representation allows for efficient post-processing algorithms and leads to high rendering quality using enhanced probabilistic EWA volume splatting. In this paper, we focus on the 3D video acquisition system and the necessary image and video processing techniques.
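
To illustrate the kind of processing chain the abstract describes (this is a minimal sketch, not the authors' implementation), the following Python example computes a depth map from a rectified pair of pattern-augmented grayscale images and back-projects the resulting samples into a common world coordinate frame, yielding a view-independent point set. The use of OpenCV's semi-global matcher, the placeholder calibration parameters, and the function names are assumptions made for this example.

```python
# Illustrative sketch (not the paper's pipeline): depth from a rectified
# pattern-image stereo pair, then back-projection to world-space 3D points.
import numpy as np
import cv2

def depth_from_pattern_stereo(left_gray, right_gray, focal_px, baseline_m):
    """Dense disparity via semi-global block matching, converted to depth (m)."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    # OpenCV returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan            # mark invalid matches
    return focal_px * baseline_m / disparity

def backproject_to_points(depth, K, cam_to_world):
    """Lift a depth map into a shared world frame (N x 3 point samples)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(),
                                        np.ones(h * w)])   # 3 x N pixel rays
    pts_cam = rays * depth.ravel()                          # camera-space points
    pts_hom = np.vstack([pts_cam, np.ones(h * w)])          # homogeneous coords
    pts_world = (cam_to_world @ pts_hom)[:3].T
    return pts_world[~np.isnan(pts_world).any(axis=1)]      # drop invalid samples
```

In a multi-brick setup, running this back-projection per camera with its own calibrated cam_to_world transform and concatenating the outputs gives a single merged point set, which is the role the view-independent data structure plays in the paper.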