A visual model for estimating the perceptual redundancy inherent in color images

  • Authors:
  • Chun-Hsien Chou;Kuo-Cheng Liu

  • Affiliations:
  • Department of Electrical Engineering, Tatung University, Taiwan (both authors)

  • Venue:
  • PCM'04 Proceedings of the 5th Pacific Rim Conference on Advances in Multimedia Information Processing - Volume Part II
  • Year:
  • 2004

Abstract

An efficient compression algorithm that is transparent to human visual perception is highly desirable for representing high-quality color images when storage space or transmission bandwidth is limited. Because the human eye is not a perfect sensor for discriminating color signals with small differences, color images contain perceptual redundancy. This paper presents a visual model for estimating the perceptual redundancy of a color image as a triple of values, one per color channel, each serving as a visibility threshold of distortion, or just noticeable difference (JND). The model is built by defining a perceptually indistinguishable region for each color: colors that are barely distinguishable from the target color in a perceptually uniform color space (PUCS) are mapped back into the target color space. To validate the proposed visual model, the estimated perceptual redundancy is used to improve the performance of JPEG-LS. Simulation results show that the perceptually tuned JPEG-LS coder is superior to the un-tuned coder in terms of both inspected visual quality and the amount of perceivable error.
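The core idea of the abstract can be illustrated with a minimal sketch: given a per-channel JND triple, coding error that stays below the threshold in every channel is perceptually redundant and should be invisible to the viewer. The function name, the fixed JND values, and the random test patch below are illustrative assumptions, not the paper's actual estimates (which vary per pixel and per color).

```python
import numpy as np

def is_perceptually_redundant(original, reconstructed, jnd):
    """Return True if the per-channel absolute error stays within the
    JND triple, i.e. the distortion should be imperceptible.

    Note: a fixed JND triple is a simplification; the paper estimates
    a JND triple per color from a perceptually uniform color space.
    """
    error = np.abs(original.astype(float) - reconstructed.astype(float))
    return bool(np.all(error <= jnd))

# Example: a small RGB patch with sub-threshold noise added.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(2, 2, 3))
noisy = patch + rng.integers(-2, 3, size=patch.shape)  # noise in [-2, 2]
jnd = np.array([4.0, 3.0, 5.0])  # illustrative per-channel thresholds

print(is_perceptually_redundant(patch, noisy, jnd))  # → True
```

In a perceptually tuned coder such as the JPEG-LS variant described here, a test of this kind would allow larger quantization of residuals wherever the resulting error remains inside the JND region, saving bits without visible degradation.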