Lightfield media production system using sparse angular sampling

  • Authors:
  • Frederik Zilly, Michael Schöberl, Peter Schäfer, Matthias Ziegler, Joachim Keinert, Siegfried Foessel

  • Affiliations:
  • Fraunhofer Institute for Integrated Circuits IIS (all authors)

  • Venue:
  • ACM SIGGRAPH 2013 Posters
  • Year:
  • 2013

Abstract

Traditional film and broadcast cameras capture the scene from a single viewpoint or, in the case of 3D cameras, from two slightly shifted viewpoints. Important creative parameters such as the camera position and orientation, the depth of field, and the amount of 3D parallax are burned into the footage during acquisition. Realizing artistic effects such as the matrix or the vertigo effect requires complex equipment and highly skilled personnel. In the former effect, the scene itself appears frozen while a camera movement is simulated by placing dozens of cameras in a mainly horizontal arrangement. The latter requires physical movement of the camera, which is usually mounted on a dolly and translates toward and away from the scene while the zoom (and focus) are adjusted accordingly. Besides the demanding requirements on equipment and personnel, the resulting effects can usually not be changed in post-production. In contrast, lightfield acquisition techniques allow these parameters to be changed in post-production. Traditionally, in the absence of a geometric model of the scene, a dense sampling of the lightfield is required. This can be achieved using large camera arrays as used by [Wilburn et al. 2005] or hand-held plenoptic cameras as proposed by [Ng et al. 2005]. While the former approach is complex to calibrate and operate due to the huge number of cameras, the latter suffers from a low resolution per view, as the total resolution of the imaging sensor must be shared among all sub-images captured by the individual micro-lenses.
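The post-production refocusing that lightfield capture enables is commonly implemented by shift-and-sum over the captured views: each camera's image is shifted against its baseline offset by an amount proportional to the chosen focal-plane disparity, then the shifted views are averaged. The following minimal Python/NumPy sketch illustrates the idea for a small camera grid; it is not the poster's implementation, and the function name, data layout, and synthetic example are assumptions made for illustration.

```python
import numpy as np

def refocus(views, positions, disparity):
    """Shift-and-sum synthetic refocusing over a camera grid (illustrative sketch).

    views:     array of shape (N, H, W) -- one grayscale image per camera
    positions: array of shape (N, 2)    -- (u, v) baseline coordinates per camera
    disparity: pixels of image shift per unit of camera baseline for the
               depth plane to be brought into focus
    """
    n, h, w = views.shape
    acc = np.zeros((h, w))
    for img, (u, v) in zip(views, positions):
        # Shift each view against its baseline offset so that scene points
        # at the chosen disparity align across all views; misaligned depths
        # are averaged away, which mimics a shallow depth of field.
        dy = int(round(-disparity * v))
        dx = int(round(-disparity * u))
        acc += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return acc / n

# Synthetic example (assumed data): a 3x3 camera grid observing a single
# bright scene point whose disparity is 2 px per unit baseline.
views = np.zeros((9, 16, 16))
positions = np.array([(u, v) for v in (-1, 0, 1) for u in (-1, 0, 1)], dtype=float)
d = 2.0
for i, (u, v) in enumerate(positions):
    views[i, int(8 + d * v), int(8 + d * u)] = 1.0

focused = refocus(views, positions, d)      # point re-aligned: sharp peak
misfocused = refocus(views, positions, 0.0) # point smeared over nine positions
```

Refocusing with the point's true disparity stacks all nine copies onto one pixel, while a wrong disparity spreads the energy out, which is exactly why the depth of field can be chosen after the shoot.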