View generation with 3D warping using depth information for FTV

  • Authors:
  • Yuji Mori, Norishige Fukushima, Tomohiro Yendo, Toshiaki Fujii, Masayuki Tanimoto

  • Affiliations:
  • Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan (all authors)

  • Venue:
  • Signal Processing: Image Communication
  • Year:
  • 2009

Abstract

In this paper, we propose a new method of depth-image-based rendering (DIBR) for free-viewpoint TV (FTV). In the conventional method, the depth of an object is estimated on the virtual image plane (view-dependent depth estimation), and the virtual view images are rendered using the resulting view-dependent depth map. In the proposed method, virtual viewpoint images are instead rendered with 3D warping, since view-dependent depth estimation is usually costly and it is desirable to eliminate it from the rendering process. However, 3D warping causes problems that do not occur with view-dependent depth estimation, such as holes in the rendered image and depth discontinuities on object surfaces at the virtual image plane, which produce artifacts in the rendered image. In this paper, these problems are solved by projecting the depth map onto the virtual image plane and performing post-filtering on the projected depth map. In the experiments, high-quality arbitrary-viewpoint images were obtained from a relatively small number of cameras.
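
The projection-then-filter idea described in the abstract can be illustrated with a short sketch. The Python example below assumes a rectified 1-D parallel camera arrangement, so that the 3D warp of the depth map reduces to a horizontal shift by the disparity d = f·B/Z; the function names, the median post-filter, and all parameters are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def warp_depth_to_virtual_view(depth, f, baseline):
    """Forward-warp a reference depth map onto the virtual image plane.

    Assumes rectified parallel cameras, so each pixel moves horizontally by
    its disparity d = f * baseline / Z. Unfilled pixels remain 0 (holes).
    """
    h, w = depth.shape
    warped = np.zeros_like(depth)
    for y in range(h):
        for x in range(w):
            z = depth[y, x]
            if z <= 0:
                continue                          # invalid depth sample
            d = int(round(f * baseline / z))      # disparity in pixels
            xv = x + d
            if 0 <= xv < w:
                # When several pixels land on xv, keep the nearest surface.
                if warped[y, xv] == 0 or z < warped[y, xv]:
                    warped[y, xv] = z
    return warped

def postfilter_projected_depth(warped_depth, size=3):
    """Post-filter the projected depth map (here, a simple median filter)
    to close small holes and suppress spurious depth discontinuities."""
    return median_filter(warped_depth, size=size)
```

Once the projected depth map has been filtered, the color of each virtual-view pixel can be fetched from the reference images by inverse warping with that depth, and any remaining disocclusion holes can be filled from another reference view; this follow-up step is implied by the abstract but not shown in the sketch.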