Spatio-temporally Consistent Multi-view Video Synthesis for Autostereoscopic Displays

  • Authors:
  • Shu-Jyuan Lin; Chia-Ming Cheng; Shang-Hong Lai

  • Affiliations:
  • Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan (all authors)

  • Venue:
  • PCM '09 Proceedings of the 10th Pacific Rim Conference on Multimedia: Advances in Multimedia Information Processing
  • Year:
  • 2009

Abstract

In this paper, we propose a novel algorithm to generate multiple virtual views from a video-plus-depth sequence for modern autostereoscopic displays. Synthesizing realistic content in the disoccluded regions of the virtual views is the main challenge in this task. To produce perceptually satisfactory images, the proposed algorithm exploits both spatial coherence and temporal consistency to handle the uncertain pixels in the disoccluded regions. For spatial coherence, we combine intensity gradient strength with depth information to determine the filling priority for inpainting the disoccluded regions, so that the continuity of image structures is preserved. For temporal consistency, we constrain the intensities in the disoccluded regions across adjacent frames through an optimization process. We propose an iterative re-weighted framework that jointly considers intensity and depth consistency in adjacent frames, which not only imposes temporal consistency but also suppresses noise. Finally, to accelerate multi-view synthesis, we apply the proposed view synthesis algorithm only at the leftmost and rightmost viewpoints to generate images plus depth there; the intermediate views are then efficiently interpolated by image warping according to the associated depth maps of the two boundary views. For experimental validation, we perform quantitative evaluation on synthetic data as well as subjective assessment on real video data, with comparisons to representative previous methods, to demonstrate the superior performance of the proposed method.
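The spatial-coherence idea above (filling priority driven by gradient strength and depth) can be illustrated with a toy priority map. This is a minimal sketch under assumed conventions, not the paper's exact weighting: `filling_priority` is a hypothetical helper, the two cues are simply normalized and multiplied, and larger depth values are assumed to mean farther background (disocclusions are usually filled from background texture to avoid foreground bleeding).

```python
import numpy as np

def filling_priority(intensity, depth, mask):
    """Toy hole-filling priority: pixels with strong image gradients
    (structure to continue) and larger depth (background) rank higher.
    Illustrative formula only; the paper's actual weighting may differ.
    mask: boolean array marking candidate (disoccluded) pixels."""
    gy, gx = np.gradient(intensity.astype(float))
    grad_strength = np.hypot(gx, gy)
    # Normalize each cue to [0, 1] before combining.
    g = grad_strength / (grad_strength.max() + 1e-8)
    d = depth / (depth.max() + 1e-8)
    priority = g * d
    priority[~mask] = -np.inf  # rank only pixels inside the hole mask
    return priority
```

Filling the highest-priority pixel first and re-running this ranking after each patch is the usual exemplar-based inpainting loop this priority would plug into.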
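The iterative re-weighted framework for temporal consistency can be sketched, in spirit, as robust re-weighted averaging of the candidate intensities a disoccluded pixel collects from adjacent frames. The function below is an illustrative IRLS scheme with L1-like weights that down-weight outliers (noise suppression); it is not the paper's joint intensity/depth objective, and `temporal_fill_irls` is a hypothetical name.

```python
import numpy as np

def temporal_fill_irls(candidates, n_iter=10, eps=1e-6):
    """Iteratively re-weighted estimate of one disoccluded pixel's
    intensity from samples gathered across adjacent frames.
    Weights 1/(|residual| + eps) shrink the influence of outlier
    samples, so the estimate drifts toward the consistent majority.
    Illustrative only; the paper also weighs depth consistency."""
    c = np.asarray(candidates, dtype=float)
    est = c.mean()  # initial guess: plain average
    for _ in range(n_iter):
        w = 1.0 / (np.abs(c - est) + eps)  # robust re-weighting
        est = (w * c).sum() / w.sum()      # weighted least-squares update
    return est
```

With consistent samples plus one noisy outlier, the estimate converges near the consistent value rather than the contaminated mean.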
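The final acceleration step (interpolating intermediate views by warping the boundary views with their depth maps) can be sketched with a minimal forward warp. Assumptions not from the source: disparity is taken proportional to the normalized depth value (larger value = nearer, as in common video-plus-depth conventions), a single `max_disp` parameter sets the baseline, and only the left boundary view is warped; the paper warps both boundary views and blends them.

```python
import numpy as np

def warp_to_intermediate(img_left, depth_left, alpha, max_disp=8):
    """Forward-warp the leftmost view toward an intermediate viewpoint.
    alpha in [0, 1]: 0 = leftmost view, 1 = rightmost view.
    Returns the warped image and a boolean hole mask (disocclusions).
    Toy model: disparity proportional to normalized depth; z-buffering
    keeps the nearer pixel when two sources land on the same target."""
    h, w = depth_left.shape
    d = depth_left.astype(float) / (depth_left.max() + 1e-8)
    disp = np.round(alpha * max_disp * d).astype(int)
    out = np.zeros_like(img_left)
    zbuf = np.full((h, w), -np.inf)
    hole = np.ones((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xt = x - disp[y, x]  # horizontal shift toward new viewpoint
            if 0 <= xt < w and d[y, x] > zbuf[y, xt]:
                zbuf[y, xt] = d[y, x]
                out[y, xt] = img_left[y, x]
                hole[y, xt] = False
    return out, hole
```

The `hole` mask marks exactly the disoccluded pixels that the inpainting and temporal-consistency stages described in the abstract must fill.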