A locally adaptive perceptual masking threshold model for image coding

  • Authors:
  • T. D. Tran;R. Safranek

  • Affiliations:
  • Dept. of Electr. & Comput. Eng., Wisconsin Univ., WI, USA

  • Venue:
  • ICASSP '96: Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume 04
  • Year:
  • 1996

Abstract

This paper describes the design, implementation, and testing of a locally adaptive perceptual masking threshold model for image compression. Based on the contents of the original image, the model computes the maximum amount of noise energy that can be injected at each transform coefficient while keeping the still image, or sequence of images, perceptually distortion-free. The model can be used as a pre-processor to a JPEG standard image coder: DCT coefficients smaller than their corresponding perceptual thresholds are set to zero before the normal JPEG quantization and Huffman coding steps. The result is an image-dependent reduction in the bit rate needed for transparent coding. In an informal subjective test on 318 still images from the AT&T Bell Laboratories image database, the model provided bit-rate savings on the order of 10 to 30%.
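The pre-processing step the abstract describes, zeroing DCT coefficients whose magnitude falls below a perceptual threshold before quantization, can be sketched as follows. This is a minimal illustration, not the authors' model: the `thresholds` matrix stands in for the locally adaptive perceptual thresholds the paper actually computes, and the block DCT is a plain orthonormal 2-D DCT-II.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    C = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            C[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0, :] *= 1.0 / np.sqrt(n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

def mask_block(block, thresholds):
    """Transform one 8x8 pixel block and zero every DCT coefficient
    whose magnitude is below its (hypothetical) perceptual threshold.
    The surviving coefficients would then go through the usual JPEG
    quantization and Huffman coding steps."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T            # 2-D DCT of the block
    coeffs[np.abs(coeffs) < thresholds] = 0.0
    return coeffs
```

With a threshold of zero the block passes through unchanged; as the perceptual thresholds grow, more coefficients are discarded and the entropy coder spends fewer bits, which is the source of the image-dependent bit-rate saving.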