Perceptual-based distributed video coding

  • Authors:
  • Yu-Chen Sun, Chun-Jen Tsai

  • Affiliations:
  • Dept. of Computer Science, National Chiao Tung University, Hsinchu 30010, Taiwan (both authors)

  • Venue:
  • Journal of Visual Communication and Image Representation
  • Year:
  • 2012

Abstract

In this paper, we propose a perceptual-based distributed video coding (DVC) technique. Unlike traditional video codecs, DVC performs the video prediction process at the decoder side using previously received frames. The predicted video frames (i.e., the side information) contain prediction errors, so the encoder transmits error-correcting parity bits that allow the decoder to reconstruct the video frames from the side information. However, channel codes based on i.i.d. noise models are not always efficient in correcting video prediction errors. In addition, some of the prediction errors do not cause perceptible visual distortions; from a perceptual coding point of view, there is no need to correct such errors. This paper proposes a scheme in which the decoder performs perceptual quality analysis on the predicted side information and requests parity bits only to correct visually sensitive errors. More importantly, with the proposed technique, key frames can be encoded at higher rates while still maintaining consistent visual quality across the video sequence. As a result, the objective PSNR of the decoded video sequence also increases. Experimental results show that the proposed technique improves the R-D performance of a transform-domain DVC codec both subjectively and objectively. Comparisons with a well-known DVC codec show that the proposed perceptual-based coding scheme is very promising for the distributed video coding framework.
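
To illustrate the decoder-side idea, the following is a minimal sketch of how a decoder might gate parity requests with a standard luminance-adaptation JND (just-noticeable-difference) threshold: blocks of the side information whose estimated prediction error stays below the threshold are left uncorrected. This is an assumption-laden toy, not the paper's actual perceptual model; the function names, 8x8 block size, and error-estimate input are hypothetical.

import numpy as np

def luminance_jnd(block):
    # Classical background-luminance JND threshold (Chou & Li style);
    # used here as a stand-in for the paper's perceptual analysis.
    bg = block.mean()
    if bg <= 127:
        return 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0
    return 3.0 / 128.0 * (bg - 127.0) + 3.0

def blocks_needing_parity(side_info, error_estimate, block_size=8):
    # Scan the side information block by block and flag only those blocks
    # whose estimated prediction error exceeds the JND threshold; in a DVC
    # decoder these would be the blocks for which parity bits are requested.
    h, w = side_info.shape
    requests = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            si = side_info[y:y + block_size, x:x + block_size]
            err = error_estimate[y:y + block_size, x:x + block_size]
            if np.abs(err).mean() > luminance_jnd(si):
                requests.append((y, x))
    return requests

# Example with synthetic side information and a hypothetical error estimate.
si = np.random.randint(0, 256, (64, 64)).astype(np.float64)
err = np.random.normal(0.0, 4.0, (64, 64))
print(len(blocks_needing_parity(si, err)), "of", (64 // 8) ** 2, "blocks would request parity")

In this sketch, errors in bright or dark regions (where the JND threshold is high) are tolerated, which mirrors the abstract's point that visually insensitive prediction errors need not consume parity bits.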