Adaptive Learning Based View Synthesis Prediction for Multi-View Video Coding

  • Authors:
  • Jinhui Hu; Ruimin Hu; Zhongyuan Wang; Ge Gao; Mang Duan; Yan Gong

  • Affiliations:
  • National Engineering Research Center for Multimedia Software, School of Computer, Wuhan University, Wuhan, China 430079

  • Venue:
  • Journal of Signal Processing Systems
  • Year:
  • 2014


Abstract

In Free Viewpoint TV applications, pre-estimated depth information is available both to synthesize intermediate views and to assist multi-view video coding. Existing view synthesis prediction schemes generate the virtual view picture only from inter-view pictures. However, many types of signal mismatch, caused by depth errors, camera heterogeneity, or illumination differences across views, decrease the prediction capability of the virtual view picture. In this paper, we propose an adaptive learning based view synthesis prediction algorithm to enhance the prediction capability of the virtual view picture. The algorithm integrates least square prediction with backward warping to synthesize the virtual view picture, exploiting not only the adjacent view information but also the temporally decoded information to adaptively learn the prediction coefficients. Experiments show that the proposed method reduces bitrates by up to 18 % relative to the multi-view video coding standard, and by about 11 % relative to the conventional view synthesis prediction method.
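The core idea of the abstract, learning least-squares prediction coefficients that blend a backward-warped inter-view signal with temporally decoded samples, can be illustrated with a minimal sketch. The function names, the linear model with an offset term, and the use of a training window of already-decoded pixels are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def learn_ls_coefficients(warped_train, temporal_train, decoded_train):
    """Learn weights w such that, over a training window of already-decoded
    pixels, decoded_train ~= w[0]*warped_train + w[1]*temporal_train + w[2].
    Because the window is decodable at both encoder and decoder, no
    coefficients would need to be transmitted (hypothetical model)."""
    A = np.column_stack([warped_train, temporal_train,
                         np.ones_like(warped_train)])
    w, *_ = np.linalg.lstsq(A, decoded_train, rcond=None)
    return w

def predict_block(warped_block, temporal_block, w):
    """Apply the learned coefficients to form the virtual-view prediction
    for the current block from its warped and temporal reference samples."""
    return w[0] * warped_block + w[1] * temporal_block + w[2]

# Example with synthetic data standing in for luma samples:
rng = np.random.default_rng(0)
warped = rng.uniform(0, 255, 64)      # backward-warped inter-view samples
temporal = rng.uniform(0, 255, 64)    # co-located temporally decoded samples
decoded = 0.7 * warped + 0.25 * temporal + 5.0  # synthetic "ground truth"
w = learn_ls_coefficients(warped, temporal, decoded)
prediction = predict_block(warped, temporal, w)
```

In a real codec the training window would consist of causally available neighbouring pixels, and the residual between the block and this learned prediction is what gets coded; the sketch only shows the coefficient-fitting step.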