Converting H.264-Derived Motion Information into Depth Map

  • Authors:
  • Mahsa T. Pourazad; Panos Nasiopoulos; Rabab K. Ward

  • Affiliations:
  • Electrical and Computer Engineering Department, University of British Columbia, Vancouver, Canada V6T 1Z4 (all authors)

  • Venue:
  • MMM '09 Proceedings of the 15th International Multimedia Modeling Conference on Advances in Multimedia Modeling
  • Year:
  • 2009

Abstract

An efficient method that estimates the depth map of a 3D scene from the motion information of its H.264-encoded 2D video is presented. Our proposed method employs a revised version of this motion information, obtained by taking into account the characteristics of 3D human visual perception. The low complexity of our approach and its compatibility with future broadcasting networks allow real-time implementation at the receiver, i.e., the 3D signal is delivered at no additional burden to the network. Performance evaluations show that our approach outperforms an existing H.264-based technique by up to 1.5 dB in PSNR, i.e., it provides more realistic depth information of the scene. Moreover, a subjective comparison of the results (obtained by viewers watching the generated stereo video sequences on a 3D display system) confirms the higher efficiency of our method.
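
To make the general idea concrete, the following is a minimal sketch of depth-from-motion estimation: per-macroblock motion vectors extracted from an H.264 decoder are mapped to a depth value under the common motion-parallax assumption that larger apparent motion corresponds to objects closer to the camera. The paper's actual revision of the motion information based on 3D human visual perception is not detailed in the abstract; the normalization and mapping below are illustrative placeholders, not the authors' method.

```python
import numpy as np

def motion_vectors_to_depth(mv_x, mv_y, max_depth=255):
    """Map per-macroblock motion vectors to an 8-bit depth map.

    mv_x, mv_y : 2-D arrays of horizontal/vertical motion components
                 (one entry per macroblock), e.g. parsed from an H.264 stream.
    Returns a 2-D uint8 array where larger values mean "closer to the viewer".
    This scaling is a placeholder, not the paper's perception-based revision.
    """
    magnitude = np.hypot(mv_x, mv_y)        # apparent motion per block
    peak = magnitude.max()
    if peak == 0:                           # static frame: flat depth map
        return np.zeros_like(magnitude, dtype=np.uint8)
    depth = (magnitude / peak) * max_depth  # normalize to [0, max_depth]
    return depth.astype(np.uint8)

# Toy example: a 4x4 grid of macroblock motion vectors
mv_x = np.array([[0, 1, 2, 4],
                 [0, 1, 3, 5],
                 [0, 0, 2, 4],
                 [0, 0, 1, 3]], dtype=float)
mv_y = np.zeros_like(mv_x)
print(motion_vectors_to_depth(mv_x, mv_y))
```

In practice, such a block-level depth map would be smoothed and upsampled to pixel resolution before being used for stereo view rendering; those steps are omitted here.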