Linear, Worst-Case Estimators for Denoising Quantization Noise in Transform Coded Images

  • Authors: O. G. Guleryuz

  • Affiliation: DoCoMo Communications Laboratories, Palo Alto, CA

  • Venue: IEEE Transactions on Image Processing
  • Year: 2006


Abstract

Transform-coded images exhibit distortions that fall outside the assumptions of traditional denoising techniques. In this paper, we use tools from robust signal processing to construct linear, worst-case estimators for the denoising of transform-compressed images. We show that while standard denoising is fundamentally determined by statistical models for images alone, the distortions induced by transform coding are heavily dependent on the structure of the transform used. Our method therefore uses simple models for the image and for the quantization error, with the latter capturing the transform dependency. Based on these models, we derive linear estimators of the original image that are optimal in the mean-squared-error sense for the worst-case cross correlation between the original image and the quantization error. Our construction is transform-agnostic and applies to transforms ranging from block discrete cosine transforms to wavelets. Furthermore, our approach accommodates different types of image statistics and can also serve as an optimization tool for the design of transforms and quantizers. Through the interaction of the source and quantizer models, our work provides useful insights and is instrumental in identifying and removing quantization artifacts from general signals coded with general transforms. Because we decouple the modeling and processing steps, we allow for the construction of many different types of estimators, depending on the desired sophistication and the available computational complexity. At the low end of this spectrum, our lookup-table-based estimator, which can be deployed in low-complexity environments, provides PSNR values competitive with some of the best results in the literature.
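To make the worst-case design idea concrete, here is a minimal sketch of a per-coefficient linear (scalar Wiener-style) estimator designed for the most adversarial admissible cross-correlation between a transform coefficient and its quantization error. This is an illustrative caricature under stated assumptions (known per-coefficient variances and a hypothetical bound `rho_max` on the correlation coefficient), not the paper's full transform-dependent construction; all names and parameters below are invented for illustration.

```python
import numpy as np

def worst_case_linear_shrinkage(sig_var, q_var, rho_max=0.3):
    """Gain a of the linear estimator x_hat = a * y for y = x + q.

    Model (assumed): var(x) = sig_var, var(q) = q_var, and the
    cross-correlation c = E[x q] is only known to satisfy
    |c| <= rho_max * sqrt(sig_var * q_var).

    For a in [0, 1] the mean-squared error
        MSE(a, c) = a^2 (s + n + 2c) - 2a (s + c) + s
    is non-increasing in c, so the worst case is the most negative
    admissible c; the minimax gain is the MSE-optimal gain at that c.
    """
    sig_var = np.asarray(sig_var, dtype=float)
    q_var = np.asarray(q_var, dtype=float)
    c_worst = -rho_max * np.sqrt(sig_var * q_var)   # worst-case cross-correlation
    denom = sig_var + q_var + 2.0 * c_worst
    gain = np.where(denom > 0, (sig_var + c_worst) / np.where(denom > 0, denom, 1.0), 0.0)
    return np.clip(gain, 0.0, 1.0)

# Toy usage: shrink observed transform coefficients coefficient-by-coefficient.
sig_var = np.array([100.0, 25.0, 4.0])   # assumed signal variances per coefficient
q_var = np.array([12.0, 12.0, 12.0])     # assumed quantization-error variances
gains = worst_case_linear_shrinkage(sig_var, q_var, rho_max=0.3)
# denoised_coefficients = gains * observed_coefficients
```

The design choice mirrors the abstract: rather than assuming the quantization error is uncorrelated with the signal (as classical denoising often does), the gain is chosen to be MSE-optimal against the worst-case correlation the model allows.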