Motion-compensated DCT temporal filters for efficient spatio-temporal scalable video coding

  • Authors:
  • Randa Atta; Rawya Rizk; Mohammad Ghanbari

  • Affiliations:
  • Electrical Engineering Department, Suez Canal University, Egypt; Electrical Engineering Department, Suez Canal University, Egypt; School of Computer Science and Electronic Engineering, University of Essex, Essex, UK

  • Venue:
  • Image Communication
  • Year:
  • 2009


Abstract

Although most proposals for implementing motion-compensated temporal filtering (MCTF) schemes are based on the wavelet transform, in this paper we propose an MCTF framework based on the discrete cosine transform (DCT). Using DCT decimation and interpolation, several temporal decomposition structures, named motion-compensated DCT temporal filters (MCDCT-TF), are introduced. These structures can employ filters of any length, with particular emphasis on the 5/3 DCT and 7/4 DCT filters. The proposed MCDCT-TF and the two-dimensional (2D) DCT decimation technique are incorporated into H.264/AVC to provide spatio-temporal scalability. Simulation results show that, compared with current MCTF-based lifting schemes such as the Haar and 5/3 wavelet filters, the proposed MCDCT-TF with longer-tap DCT filters achieves a significant improvement in coding gain. The impacts of odd/even groups of frames, decimation/interpolation ratios, and motion-compensated connectivity on MCDCT-TF performance are also analyzed. Moreover, simulation results show that the performance of the presented scalable video coder is close to that of single-layer H.264/AVC and only slightly inferior to the temporal scalability supported in JSVM, the state-of-the-art scalable video coding standard, which derives its gain from hierarchical B-pictures. However, our spatio-temporal coding scheme outperforms the spatio-temporal scalability supported in JSVM, even when the latter uses hierarchical B-pictures to improve its gain.
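
As a rough illustration of the DCT decimation/interpolation operation that underlies the MCDCT-TF structures, the sketch below downsamples and upsamples a 1D signal by truncating or zero-padding its DCT-II coefficients. This is a minimal sketch of the general technique, not the authors' implementation; the function names and the SciPy-based approach are assumptions made for illustration. Applied separably along rows and then columns of a frame, the same idea gives the 2D DCT decimation mentioned in the abstract.

```python
# Illustrative sketch of DCT-domain decimation/interpolation (not the paper's code).
import numpy as np
from scipy.fft import dct, idct

def dct_decimate(x, m):
    """Downsample a 1D signal to length m by truncating its DCT-II coefficients."""
    n = len(x)
    coeffs = dct(x, norm='ortho')           # forward DCT of the full-length signal
    kept = coeffs[:m] * np.sqrt(m / n)      # keep low-frequency terms, rescale amplitude
    return idct(kept, norm='ortho')         # inverse DCT yields the decimated signal

def dct_interpolate(x, n):
    """Upsample a 1D signal to length n by zero-padding its DCT-II coefficients."""
    m = len(x)
    coeffs = dct(x, norm='ortho')
    padded = np.zeros(n)
    padded[:m] = coeffs * np.sqrt(n / m)    # zero-pad the high frequencies, rescale
    return idct(padded, norm='ortho')

# Example: halve and then restore the length of a simple test signal.
x = np.cos(np.linspace(0, np.pi, 16))
x_half = dct_decimate(x, 8)
x_back = dct_interpolate(x_half, 16)
```

The truncation acts as an ideal-like low-pass filter in the DCT domain, which is why filters of different effective lengths (e.g. the 5/3 DCT and 7/4 DCT cases studied in the paper) can be realized by changing the decimation/interpolation ratios.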