Depth-Based Disocclusion Filling for Virtual View Synthesis

  • Authors:
  • Ilkoo Ahn; Changick Kim


  • Venue:
  • ICME '12 Proceedings of the 2012 IEEE International Conference on Multimedia and Expo
  • Year:
  • 2012


Abstract

Free-viewpoint rendering (FVR) has become a popular topic in 3D research. A promising technology in FVR is the generation of virtual views from a single texture image and its corresponding depth image. A critical problem in generating virtual views is that regions covered by foreground objects in the original view may become disoccluded in the synthesized views. In this paper, a depth-based disocclusion filling algorithm using patch-based texture synthesis is proposed. In contrast to existing patch-based virtual view synthesis methods, the filling priority is driven by a robust structure tensor and an epipolar directional term. Moreover, the best-matched patch is searched for only in background regions and is selected by considering color similarity together with factors such as the epipolar line and the magnitude of the data term. The superiority of the proposed method over existing methods is demonstrated by experimental comparisons.
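
The abstract describes driving the filling priority with a structure tensor rather than the usual isophote-based data term. The following is a minimal sketch of that idea only, not the paper's implementation: the function name `structure_tensor_data_term`, the Sobel/Gaussian operators, and the coherence-based score are illustrative assumptions, and the epipolar directional term and background-restricted patch search described in the abstract are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel


def structure_tensor_data_term(gray, hole_mask, sigma=1.5):
    """Hypothetical data term for fill-front pixels from a smoothed structure tensor.

    gray      : 2-D float image (known texture; values inside the hole are unreliable)
    hole_mask : boolean array, True where texture is missing (disoccluded)
    Returns a per-pixel score; higher values indicate stronger oriented structure,
    so patches centered on those fill-front pixels would be synthesized first.
    """
    # Image gradients over the known texture.
    Ix = sobel(gray, axis=1)
    Iy = sobel(gray, axis=0)

    # Smoothed structure tensor components J = G_sigma * (grad I grad I^T).
    Jxx = gaussian_filter(Ix * Ix, sigma)
    Jxy = gaussian_filter(Ix * Iy, sigma)
    Jyy = gaussian_filter(Iy * Iy, sigma)

    # Eigenvalues of the 2x2 tensor; their normalized difference (coherence)
    # measures how strongly oriented the local structure is.
    trace = Jxx + Jyy
    diff = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    lam1 = 0.5 * (trace + diff)
    lam2 = 0.5 * (trace - diff)
    coherence = (lam1 - lam2) / (lam1 + lam2 + 1e-8)

    # Only disoccluded pixels receive a priority score; known pixels are zeroed.
    return np.where(hole_mask, coherence, 0.0)
```

In a patch-based filling loop, a score like this would typically be combined with a confidence term and, per the abstract, an epipolar directional term, and the patch with the highest combined priority would be filled from the best-matching background patch.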