Real-time view synthesis from a sparse set of views

  • Authors:
  • George Chen; Yang Liu; Nelson Max

  • Affiliations:
  • SoarSpace Inc., 1103 Cowboys Pkwy, Irving, TX 75063, USA; Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA 94550, USA; Department of Computer Science, University of California Davis, One Shields Avenue, Davis, CA 95616, USA

  • Venue:
  • Signal Processing: Image Communication
  • Year:
  • 2007

Abstract

It is known that the pure light field approach to view synthesis relies on a large number of image samples to produce anti-aliased renderings; otherwise, the insufficiency of image sampling must be compensated for by geometry sampling. Currently, geometry estimation is done either offline or with dedicated hardware. Our solution to this dilemma is based on three key ideas: a formal analysis of the equivalence between light field rendering and plane-based warping, multi-focus imaging in a multi-camera system by plane sweeping, and the fusion of the multi-focus images using multi-view stereo. The essence of our method is to perform depth estimation only up to the level required by the minimal joint image-geometry sampling rate, using off-the-shelf graphics hardware. As a result, real-time anti-aliased light field rendering is achieved even when the image samples are insufficient.
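
As a rough illustration of the plane-sweep idea the abstract describes, the following NumPy sketch sweeps a family of fronto-parallel depth planes through a calibrated multi-camera rig, warps each source view onto the reference view per plane, and fuses the resulting "multi-focus" images with a simple variance-based photo-consistency cost. All function names, camera parameters, and the cost function here are illustrative assumptions; the paper's GPU implementation and exact stereo fusion are not reproduced.

```python
# Minimal plane-sweep stereo sketch (illustrative, not the paper's method).
# Assumes grayscale float images and calibrated cameras: intrinsics K,
# plus rotation R and translation t of each source camera relative to
# the reference camera.
import numpy as np

def plane_homography(K_ref, K_src, R, t, depth):
    """Homography induced by the fronto-parallel plane z = depth in the
    reference frame, mapping reference pixels to source pixels."""
    n = np.array([0.0, 0.0, 1.0])  # plane normal along the optical axis
    return K_src @ (R - np.outer(t, n) / depth) @ np.linalg.inv(K_ref)

def warp_to_ref(img, H, shape):
    """Warp a source image into the reference view for one depth plane.
    Nearest-neighbor sampling and zero fill keep the sketch short."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    q = H @ pts
    u = np.rint(q[0] / q[2]).astype(int)
    v = np.rint(q[1] / q[2]).astype(int)
    valid = (u >= 0) & (u < img.shape[1]) & (v >= 0) & (v < img.shape[0])
    out = np.zeros(h * w)
    out[valid] = img[v[valid], u[valid]]
    return out.reshape(h, w)

def plane_sweep_depth(ref_img, src_imgs, K_ref, cams, depths):
    """Per-pixel depth by minimizing color variance across the warped
    views at each candidate plane: low variance means the plane is
    'in focus' there, i.e. close to the true surface."""
    h, w = ref_img.shape
    best_cost = np.full((h, w), np.inf)
    best_depth = np.zeros((h, w))
    for d in depths:
        stack = [ref_img]
        for img, (K_src, R, t) in zip(src_imgs, cams):
            H = plane_homography(K_ref, K_src, R, t, d)
            stack.append(warp_to_ref(img, H, (h, w)))
        cost = np.var(np.stack(stack), axis=0)
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = d
    return best_depth
```

In the paper's setting, the per-plane warps correspond to plane-based warping of the light field samples, and running the sweep on graphics hardware is what makes the depth estimation, and hence the anti-aliased rendering, real-time.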