The multiple-camera 3-D production studio

  • Authors:
  • Jonathan Starck; Atsuto Maki; Shohei Nobuhara; Adrian Hilton; Takashi Matsuyama

  • Affiliations:
  • Center for Vision, Speech, and Signal Processing, University of Surrey, Surrey, UK; Toshiba Research Europe, Ltd., Cambridge Research Laboratory, Cambridge, UK and Graduate School of Informatics, Kyoto University, Kyoto, Japan; Graduate School of Informatics, Kyoto University, Kyoto, Japan; Center for Vision, Speech, and Signal Processing, University of Surrey, Surrey, UK; Graduate School of Informatics, Kyoto University, Kyoto, Japan

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 2009

Abstract

Multiple-camera systems are currently widely used in research and development as a means of capturing and synthesizing realistic 3-D video content. Studio systems for 3-D production of human performance are reviewed from the literature, and the practical experience gained in developing prototype studios across two research laboratories is reported. System design should consider the studio backdrop for foreground matting, lighting for ambient illumination, camera acquisition hardware, the camera configuration for scene capture, and accurate geometric and photometric camera calibration. A ground-truth evaluation is performed to quantify the effect of different constraints on the multiple-camera system in terms of geometric accuracy and the requirement for high-quality view synthesis. Changing camera height has only a limited influence on surface visibility, so multiple camera sets or an active vision system may be required for wide-area capture. Accurate reconstruction requires a camera baseline of 25°, and the achievable accuracy is 5-10 mm at current camera resolutions. Accuracy is inherently limited, and view-dependent rendering is required for view synthesis with sub-pixel accuracy where display resolutions match camera resolutions. The two prototype studios are contrasted, and state-of-the-art techniques for 3-D content production are demonstrated.
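
As a rough illustration of why the camera baseline matters for reconstruction accuracy, the sketch below propagates a sub-pixel matching error through a simple two-view triangulation model. This is not the paper's evaluation procedure; the working distance, focal length, and matching-error values are assumed purely for illustration, and the parallel-camera error formula is applied as an approximation for verged cameras.

```python
import numpy as np

def depth_error(baseline_angle_deg, distance_m=3.0,
                focal_px=1500.0, match_error_px=0.5):
    """Approximate depth uncertainty (metres) from a pixel matching error.

    For two cameras verged on a subject at range Z and separated by an
    angle theta, the effective baseline is b = 2 * Z * sin(theta / 2).
    Standard stereo error propagation then gives dZ ~ Z^2 / (f * b) * dd,
    where f is the focal length in pixels and dd the disparity error.
    All default parameter values here are hypothetical, not from the paper.
    """
    angle = np.radians(baseline_angle_deg)
    baseline_m = 2.0 * distance_m * np.sin(angle / 2.0)
    return (distance_m ** 2) / (focal_px * baseline_m) * match_error_px

for angle in (10, 25, 45):
    print(f"{angle:3d} deg baseline -> ~{depth_error(angle) * 1000:.1f} mm depth error")
```

With these assumed values the estimated depth error shrinks as the baseline widens, which is consistent with the abstract's point that the achievable accuracy is bounded by the camera geometry and resolution rather than by the reconstruction algorithm alone.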