Synthesis-in-the-loop for video texture coding

  • Authors:
  • Aleksandar Stojanovic; Mathias Wien; Thiow Keng Tan

  • Affiliations:
  • RWTH Aachen University, Aachen, Germany; RWTH Aachen University, Aachen, Germany; NTT DoCoMo, Inc., Tokyo, Japan

  • Venue:
  • ICIP '09: Proceedings of the 16th IEEE International Conference on Image Processing
  • Year:
  • 2009


Abstract

In this paper, we present an algorithm using dynamic texture synthesis for closed-loop video coding. Video textures, or so-called dynamic textures, are video sequences containing moving texture that exhibits stationarity properties over time, such as water surfaces, whirlwinds, clouds, crowds, or even parts of head-and-shoulder scenes. By learning the temporal statistics of such content, we can in principle synthesize the corresponding areas in future frames of the video. In this paper we show that this synthesized image content can also be used for prediction in a closed-loop hybrid video coding system, where the encoder decides on the usage of such synthesized content and the possible transmission of a residual error signal. This is done in an adaptive and rate-distortion optimized way, such that higher compression performance can be achieved at both high and low bitrates. We show that local adaptation of the algorithm can lead to better compression performance and reduce the computational complexity considerably. If PSNR is used as the quality criterion, bitrate savings of up to 15% have been observed experimentally.
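The two ideas in the abstract can be sketched in simplified form: learning a temporal model of the texture to extrapolate a future frame, and an encoder-side, Lagrangian rate-distortion decision between the synthesized prediction and standard inter prediction. This is a minimal illustration, not the paper's method: the scalar first-order autoregressive model, the `fit_ar1`/`choose_mode` helpers, and the SSD distortion measure are all assumptions chosen for brevity; the actual dynamic texture model and RD optimization in the paper are more elaborate.

```python
import numpy as np

def fit_ar1(frames):
    """Least-squares estimate of a scalar coefficient a such that
    frame[t] ~= a * frame[t-1]. An illustrative stand-in for learning
    the temporal statistics of a dynamic texture (assumption, not the
    paper's model)."""
    num = sum(float(np.sum(frames[t] * frames[t - 1])) for t in range(1, len(frames)))
    den = sum(float(np.sum(frames[t - 1] ** 2)) for t in range(1, len(frames)))
    return num / den

def synthesize_next(frames, a):
    # Extrapolate the texture one frame ahead with the learned model.
    return a * frames[-1]

def choose_mode(original, synth_pred, inter_pred, rate_synth, rate_inter, lam):
    """Encoder-side mode decision: pick synthesis-based or standard
    inter prediction by minimizing the Lagrangian cost J = D + lam * R,
    with D the sum of squared differences and R the rate in bits."""
    j_synth = float(np.sum((original - synth_pred) ** 2)) + lam * rate_synth
    j_inter = float(np.sum((original - inter_pred) ** 2)) + lam * rate_inter
    return ("synthesis", j_synth) if j_synth <= j_inter else ("inter", j_inter)
```

For a texture whose intensity decays geometrically from frame to frame, the fitted model extrapolates the next frame almost exactly, so the RD decision prefers the synthesized prediction over a simple frame-copy inter prediction; with a closed loop, the encoder would then signal this choice and optionally transmit a residual.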