Multiple-description video coding using motion-compensated temporal prediction

  • Authors:
  • A. R. Reibman; H. Jafarkhani; Yao Wang; M. T. Orchard; R. Puri

  • Affiliations:
  • AT&T Labs-Research, Florham Park, NJ

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 2002

Abstract

We propose multiple description (MD) video coders that use motion-compensated temporal prediction. Our MD video coders combine MD transform coding with three separate prediction paths at the encoder, mimicking the three possible scenarios at the decoder: both descriptions are received, or only one of the two descriptions is received. We provide three different algorithms to control the mismatch between the prediction loops at the encoder and decoder. We present simulation results comparing the three approaches to two standards-based approaches to MD video coding. We show that when the main prediction loop at the encoder uses a two-channel reconstruction, it is important to have side prediction loops and to transmit some redundancy information to control mismatch. We also examine the performance of our MD video coder with partial mismatch control in the presence of random packet loss, and demonstrate a significant improvement compared to more traditional approaches.
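To make the three-prediction-loop idea in the abstract concrete, the following is a minimal, illustrative sketch and not the paper's actual algorithm. All names (ThreeLoopMDEncoder, quantize, q_central, q_side) are invented for illustration; motion compensation is reduced to identity prediction, and splitting coefficients between the two descriptions stands in for the paper's MD transform coding.

```python
import numpy as np


def quantize(x, step):
    """Uniform quantization; a crude stand-in for the paper's transform coder."""
    return np.round(x / step) * step


class ThreeLoopMDEncoder:
    """Toy MD encoder with one central prediction loop (both descriptions
    decoded) and two side loops (only description 1 or only description 2)."""

    def __init__(self, q_central=4.0, q_side=8.0):
        self.q_central = q_central          # quantizer step for the central residual
        self.q_side = q_side                # coarser step for mismatch redundancy
        self.ref_central = None             # reference when both descriptions arrive
        self.ref_side = [None, None]        # references for single-description decoding

    def encode_frame(self, frame):
        frame = frame.astype(np.float64)
        # Prediction from the central loop; identity "motion compensation" here.
        pred_central = self.ref_central if self.ref_central is not None else 0.0
        residual = frame - pred_central

        # Quantize the central residual and split it into two descriptions by
        # alternating columns (a placeholder for MD transform coding).
        q_res = quantize(residual, self.q_central)
        d1 = np.zeros_like(q_res)
        d2 = np.zeros_like(q_res)
        d1[..., 0::2] = q_res[..., 0::2]
        d2[..., 1::2] = q_res[..., 1::2]

        # Central (two-channel) reconstruction drives the main prediction loop.
        rec_central = pred_central + d1 + d2

        descriptions = [d1, d2]
        redundancy = []
        for k in range(2):
            # Side reconstruction from a single description only.
            pred_side = self.ref_side[k] if self.ref_side[k] is not None else 0.0
            rec_side = pred_side + descriptions[k]
            # Coarsely coded mismatch between central and side reconstructions,
            # sent as redundancy so the decoder's side loop stays in step.
            mismatch = quantize(rec_central - rec_side, self.q_side)
            redundancy.append(mismatch)
            self.ref_side[k] = rec_side + mismatch

        self.ref_central = rec_central
        return descriptions, redundancy


# Usage: encode a short sequence of random "frames".
enc = ThreeLoopMDEncoder()
for frame in (np.random.rand(16, 16) * 255 for _ in range(3)):
    (d1, d2), (r1, r2) = enc.encode_frame(frame)
```

The key design point the sketch illustrates is that the encoder keeps three reference frames rather than one, so the redundancy it transmits can be sized against the actual drift each single-description decoder would accumulate.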