Panoramic video coding using affine motion compensated prediction

  • Authors:
  • Zheng Jiali, Zhang Yongdong, Shen Yanfei, Ni Guangnan

  • Affiliations:
  • Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China (all authors)

  • Venue:
  • MCAM'07 Proceedings of the 2007 international conference on Multimedia content analysis and mining
  • Year:
  • 2007

Abstract

This paper proposes an affine motion compensated prediction (AMCP) method to predict the complex changes between successive frames in panoramic video coding. Panoramic video is an image-based rendering (IBR) technique [1] that provides users with a large field of view (e.g., 360 degrees) of surrounding dynamic scenes. Its motion includes not only translational motion but also non-translational motion such as zooming and rotation. The traditional motion compensated prediction, however, is a translational motion compensated prediction (TMCP) that cannot accurately predict non-translational changes between panoramic images. AMCP models the non-translational motion of panoramic video accurately using six motion coefficients, which are estimated by a Gauss-Newton iterative minimization algorithm [2]. Simulation results show a coding gain of up to about 1.3 dB when using AMCP instead of TMCP in panoramic video coding.
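The abstract's six-coefficient affine model and its Gauss-Newton estimation can be sketched as follows. This is a generic illustration, not the authors' implementation: the parameterization x' = a1 + a2·x + a3·y, y' = a4 + a5·x + a6·y, the dense whole-frame estimation, and all function names are assumptions on my part; the paper's references [1] and [2] give the actual details.

```python
import numpy as np

def affine_warp_coords(params, x, y):
    """Six-parameter affine motion model (assumed parameterization):
       x' = a1 + a2*x + a3*y,   y' = a4 + a5*x + a6*y."""
    a1, a2, a3, a4, a5, a6 = params
    return a1 + a2 * x + a3 * y, a4 + a5 * x + a6 * y

def sample_bilinear(img, xp, yp):
    """Bilinearly sample img at real-valued coordinates (xp, yp)."""
    h, w = img.shape
    xp = np.clip(xp, 0.0, w - 1.001)
    yp = np.clip(yp, 0.0, h - 1.001)
    x0 = np.floor(xp).astype(int)
    y0 = np.floor(yp).astype(int)
    fx, fy = xp - x0, yp - y0
    return (img[y0, x0] * (1 - fx) * (1 - fy) +
            img[y0, x0 + 1] * fx * (1 - fy) +
            img[y0 + 1, x0] * (1 - fx) * fy +
            img[y0 + 1, x0 + 1] * fx * fy)

def gauss_newton_affine(ref, cur, n_iters=25):
    """Estimate the six affine coefficients so that ref(warp(x, y))
    predicts cur(x, y), by Gauss-Newton minimization of the summed
    squared prediction error."""
    h, w = cur.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    params = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 1.0])  # start at identity
    for _ in range(n_iters):
        xp, yp = affine_warp_coords(params, x, y)
        warped = sample_bilinear(ref, xp, yp)   # current prediction
        gy, gx = np.gradient(warped)            # spatial gradients
        r = (warped - cur).ravel()              # prediction residual
        # Jacobian of the residual w.r.t. (a1..a6) under the usual
        # Lucas-Kanade-style linearization
        J = np.stack([gx.ravel(), (gx * x).ravel(), (gx * y).ravel(),
                      gy.ravel(), (gy * x).ravel(), (gy * y).ravel()], axis=1)
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        params = params + delta
        if np.linalg.norm(delta) < 1e-9:
            break
    return params
```

With a2 = a6 = 1 and a3 = a5 = 0 the model reduces to TMCP's pure translation (a1, a4); the extra coefficients are what let AMCP capture zoom and rotation between panoramic frames.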