A framework for adaptive scalable video coding using Wyner-Ziv techniques

  • Authors:
  • Huisheng Wang, Ngai-Man Cheung, Antonio Ortega

  • Affiliations:
  • Integrated Media Systems Center and Department of Electrical Engineering, USC Viterbi School of Engineering, University of Southern California, Los Angeles, CA (all authors)

  • Venue:
  • EURASIP Journal on Applied Signal Processing
  • Year:
  • 2006

Abstract

This paper proposes a practical video coding framework based on distributed source coding principles, with the goal of achieving efficient and low-complexity scalable coding. Starting from a standard predictive coder as the base layer (the MPEG-4 baseline video coder in our implementation), the proposed Wyner-Ziv scalable (WZS) coder achieves higher coding efficiency by selectively exploiting the high-quality reconstruction of the previous frame in the enhancement-layer coding of the current frame. This creates a multi-layer Wyner-Ziv prediction "link" that connects the same bitplane level across successive frames, providing improved temporal prediction compared to MPEG-4 FGS while keeping encoder complexity reasonable. Since the temporal correlation varies in time and space, a block-based adaptive mode selection algorithm is designed for each bitplane, so that the coder can switch between different coding modes. Experimental results show coding-efficiency gains of 3-4.5 dB over MPEG-4 FGS for video sequences with high temporal correlation.
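To illustrate the block-based adaptive mode selection idea described in the abstract, the following is a minimal sketch, not the paper's actual algorithm: it assumes hypothetical bit-cost proxies (bitplane significance count for an FGS-style mode, bitplane mismatch against side information for a Wyner-Ziv mode) and simply picks the cheaper mode per block and per bitplane. All function names and thresholds here are illustrative assumptions.

```python
import numpy as np

def estimate_bit_cost_fgs(block_bitplane):
    # Hypothetical proxy: cost of coding this bitplane block without
    # temporal prediction, approximated by its number of significant bits.
    return np.count_nonzero(block_bitplane)

def estimate_bit_cost_wz(block_bitplane, side_info_bitplane):
    # Hypothetical proxy: Wyner-Ziv cost grows with the mismatch between the
    # current bitplane block and the side information taken from the
    # high-quality enhancement-layer reconstruction of the previous frame.
    return np.count_nonzero(block_bitplane != side_info_bitplane)

def select_modes(curr_bitplanes, side_info_bitplanes, block_size=8):
    """Return a per-bitplane, per-block mode map: 'WZ' or 'FGS'."""
    num_planes, h, w = curr_bitplanes.shape
    modes = {}
    for p in range(num_planes):
        for by in range(0, h, block_size):
            for bx in range(0, w, block_size):
                cur = curr_bitplanes[p, by:by + block_size, bx:bx + block_size]
                side = side_info_bitplanes[p, by:by + block_size, bx:bx + block_size]
                cost_fgs = estimate_bit_cost_fgs(cur)
                cost_wz = estimate_bit_cost_wz(cur, side)
                # Use the Wyner-Ziv temporal mode only where it is estimated cheaper.
                modes[(p, by, bx)] = 'WZ' if cost_wz < cost_fgs else 'FGS'
    return modes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    planes = rng.integers(0, 2, size=(3, 16, 16), dtype=np.uint8)
    # Side information that mostly agrees with the current frame,
    # mimicking high temporal correlation except in one corner.
    side = planes.copy()
    side[:, :4, :4] ^= rng.integers(0, 2, size=(3, 4, 4), dtype=np.uint8)
    print(select_modes(planes, side))
```

In this toy run, blocks whose side information matches the current bitplane are flagged 'WZ', while the deliberately corrupted corner tends to fall back to 'FGS', mirroring the abstract's point that temporal prediction should only be exploited where correlation is high.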