Model-Based Joint Bit Allocation Between Texture Videos and Depth Maps for 3-D Video Coding

  • Authors:
  • Hui Yuan; Yilin Chang; Junyan Huo; Fuzheng Yang; Zhaoyang Lu

  • Affiliations:
  • Sch. of Inf. Sci. & Eng., Shandong Univ., Jinan, China

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 2011

Abstract

In 3-D video coding, texture videos and depth maps need to be jointly coded, and distortion in either of them propagates to the synthesized virtual views. Besides the coding efficiency of the texture videos and depth maps themselves, joint bit allocation between them is therefore an important research issue in 3-D video coding. We first present a comprehensive analysis of how the compression distortion of texture videos and depth maps affects the quality of the synthesized virtual views, and then derive a concise distortion model for the virtual views. Based on this model, the joint bit allocation problem is formulated as a constrained optimization problem and solved using the Lagrangian multiplier method. Experimental results demonstrate the high accuracy of the derived distortion model. The rate-distortion (R-D) performance of the proposed algorithm is close to that of search-based algorithms, which give the best R-D performance, while its complexity is lower. Moreover, compared with a bit allocation method that uses a fixed texture-to-depth bit ratio (5:1), the proposed algorithm achieves a gain of up to 1.2 dB.
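
For reference, a minimal sketch of the constrained optimization described in the abstract, written in generic notation that may differ from the paper's: D_v denotes the synthesized-view distortion model, R_t and R_d the texture and depth bit rates, and R_c the total bit budget.

    % Joint bit allocation as a rate-constrained minimization (illustrative notation)
    \min_{R_t, R_d} \; D_v(R_t, R_d) \quad \text{s.t.} \quad R_t + R_d \le R_c
    % Lagrangian relaxation of the constraint
    J(R_t, R_d, \lambda) = D_v(R_t, R_d) + \lambda \,(R_t + R_d - R_c)
    % Optimality conditions solved for the rate split
    \frac{\partial J}{\partial R_t} = 0, \qquad \frac{\partial J}{\partial R_d} = 0, \qquad R_t + R_d = R_c

With a concrete form of D_v(R_t, R_d), these conditions yield a closed-form or low-complexity rate split, which is why the model-based approach avoids the exhaustive search over (R_t, R_d) pairs that search-based algorithms require.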