Perceptually adaptive joint deringing-deblocking filtering for scalable video transmission over wireless networks

  • Authors:
  • Shuai Wan;Marta Mrak;Naeem Ramzan;Ebroul Izquierdo

  • Affiliations:
Multimedia and Vision Research Group, Queen Mary, University of London, Mile End Road, E1 4NS London, UK (all authors)

  • Venue:
  • Image Communication
  • Year:
  • 2007

Abstract

Video transmission over low bit-rate channels, such as wireless networks, requires dedicated filtering during decoding to achieve a crucial enhancement of perceptual video quality. Deringing and deblocking are therefore indispensable components of decoders in wireless video transmission systems. This paper introduces a new perceptually adaptive joint deringing-deblocking filtering technique for scalable video streams, aimed at improving the visual quality of decoded video. The proposed approach is designed to suppress artefacts inherent to transmission over very low bit-rate channels, specifically wireless networks. It considers both the prediction and update steps of motion-compensated temporal filtering in an in-loop filtering architecture, and integrates three filtering modules that handle low-pass, high-pass and after-update frames, respectively. The filter strength is adaptively tuned according to the number of discarded bit-planes, which in turn depends on the channel bit-rate and the channel error conditions. Furthermore, since ringing and blocking artefacts are visually annoying, relevant characteristics of the human visual system are incorporated into the bilateral filtering model: the amount of filtering is adjusted to the perceptual distortion through a human visual system model based on luminance, activity and temporal masking. As a consequence, the resulting filter strength adapts automatically to both perceptual sensitivity and channel variation. To assess the performance of the proposed approach, a comprehensive comparative evaluation against the conventional loop architecture and the bilateral filter was conducted. The experimental results show the superior performance of the proposed adaptive filtering approach, which provides better objective and subjective quality.
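The core mechanism described above — a bilateral filter whose strength grows with the number of discarded bit-planes — can be illustrated with a short sketch. The brute-force implementation, the function names, and the specific rule that doubles the range spread per discarded bit-plane are illustrative assumptions for this sketch, not the paper's actual model (which further modulates the strength with luminance, activity and temporal masking):

```python
import numpy as np

def bilateral_filter(frame, sigma_s, sigma_r):
    """Brute-force bilateral filter on a 2-D grayscale frame.

    sigma_s: spatial (geometric) spread; sigma_r: range (photometric) spread.
    Larger sigma_r smooths more aggressively across intensity differences,
    which is how filter "strength" is controlled here.
    """
    radius = int(np.ceil(2 * sigma_s))
    h, w = frame.shape
    padded = np.pad(frame.astype(np.float64), radius, mode="edge")
    out = np.empty((h, w), dtype=np.float64)

    # Precompute the spatial Gaussian kernel once; it does not depend on content.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))

    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: penalise large intensity differences so edges survive
            # while ringing/blocking oscillations in smooth areas are averaged out.
            rng = np.exp(-((patch - frame[i, j]) ** 2) / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out

def adaptive_sigma_r(discarded_bitplanes, base=4.0):
    """Hypothetical adaptation rule: each discarded bit-plane roughly doubles
    the effective quantisation step, so the range spread is doubled with it."""
    return base * (2.0 ** discarded_bitplanes)
```

With this rule, a stream truncated by three bit-planes would be filtered with an eight-times-larger range spread than an untruncated one, mirroring the abstract's idea that filter strength tracks the channel-driven loss of fidelity.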