Joint rendering and segmentation of free-viewpoint video

  • Authors:
  • Masato Ishii, Keita Takahashi, Takeshi Naemura

  • Affiliations:
  • Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan (all authors)

  • Venue:
  • Journal on Image and Video Processing - Special issue on multicamera information processing: acquisition, collaboration, interpretation, and production
  • Year:
  • 2010


Abstract

This paper presents a method that jointly performs view synthesis and object segmentation of free-viewpoint video, using multiview video as input. The method is designed to achieve robust segmentation from online video input without per-frame user interaction or precomputation. It shares a calculation process between the synthesis and segmentation steps: the matching costs computed during synthesis are adaptively fused with other cues, according to their reliability, in the segmentation step. Since segmentation is performed directly at arbitrary viewpoints, the extracted object can be superimposed onto another 3D scene with geometric consistency; the object and its new background move naturally with the viewpoint change, as if they existed together in the same space. In experiments, our method processed online video input captured by a 25-camera array and rendered the resulting images at 4.55 fps.
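The core idea in the abstract, reusing the synthesis step's matching costs and blending them with other cues according to their reliability, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the function names, the second cue (a color-model cost), the linear fusion rule, and the threshold are all assumptions for illustration.

```python
def fuse_costs(matching_cost, color_cost, reliability):
    """Blend the synthesis matching cost with an auxiliary color cue.

    reliability in [0, 1]: how much to trust the matching cost,
    e.g. high where the multiview correspondence is unambiguous.
    (Hypothetical linear fusion rule; the paper's rule may differ.)
    """
    return reliability * matching_cost + (1.0 - reliability) * color_cost


def segment_pixel(matching_cost, color_cost, reliability, threshold=0.5):
    """Label a pixel as object (True) when its fused object cost is low."""
    return fuse_costs(matching_cost, color_cost, reliability) < threshold


# Example: the matching cost strongly favors the object (low cost) while the
# color cue is ambiguous; the matching cost is deemed 75% reliable, so it
# dominates the fused decision.
fused = fuse_costs(0.2, 0.8, 0.75)        # 0.75*0.2 + 0.25*0.8 = 0.35
is_object = segment_pixel(0.2, 0.8, 0.75)  # True: fused cost below threshold
```

The reliability weight is what makes the fusion adaptive: where the matching cost is trustworthy it drives the segmentation, and elsewhere the auxiliary cue takes over.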