View-spatial-temporal post-refinement for view synthesis in 3D video systems

  • Authors: Linwei Zhu, Yun Zhang, Mei Yu, Gangyi Jiang, Sam Kwong

  • Venue: Signal Processing: Image Communication
  • Year: 2013

Abstract

Depth-image-based rendering is one of the key techniques for realizing view synthesis in three-dimensional television and free-viewpoint television, which provide high-quality and immersive experiences to end viewers. However, artifacts in the rendered images, including holes caused by occlusion/disocclusion and boundary artifacts, may degrade both subjective and objective image quality. To handle these problems and improve the quality of rendered images, we present a novel view-spatial-temporal post-refinement method for view synthesis, in which new hole-filling and boundary-artifact-removal techniques are proposed. In addition, we propose an optimal reference frame selection algorithm to achieve a better trade-off between computational complexity and rendered image quality. Experimental results show that the proposed method achieves a peak signal-to-noise ratio gain of 0.94 dB on average for multiview video test sequences compared with the benchmark view synthesis reference software. The subjective quality of the rendered images is also improved.
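For readers unfamiliar with the terminology, the sketch below illustrates what spatial hole-filling in a rendered view and the PSNR metric cited above refer to. It is a minimal illustration using off-the-shelf OpenCV inpainting, not the view-spatial-temporal refinement proposed in the paper; the function names (`fill_holes`, `psnr`) and the convention that holes are marked by a non-zero mask are assumptions made for the example.

```python
import cv2
import numpy as np


def fill_holes(rendered_bgr: np.ndarray, hole_mask: np.ndarray, radius: int = 3) -> np.ndarray:
    """Fill disocclusion holes in a DIBR-rendered view by spatial inpainting.

    rendered_bgr : HxWx3 uint8 image warped to the virtual viewpoint.
    hole_mask    : HxW uint8 mask, non-zero where no source pixel was mapped.
    """
    # Telea inpainting propagates surrounding texture into the hole regions;
    # this is a generic spatial fill, not the paper's proposed method.
    return cv2.inpaint(rendered_bgr, hole_mask, radius, cv2.INPAINT_TELEA)


def psnr(reference: np.ndarray, rendered: np.ndarray) -> float:
    """Peak signal-to-noise ratio (dB) between a reference view and a rendered view."""
    mse = np.mean((reference.astype(np.float64) - rendered.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```

Reported gains such as the 0.94 dB figure above are differences in this PSNR value, averaged over the frames of the test sequences, between the refined rendering and the reference-software output.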