Evaluation of a multiscale color model for visual difference prediction

  • Authors:
  • P. George Lovell, University of Bristol, Bristol, UK
  • C. Alejandro Párraga, University of Bristol, Bristol, UK
  • Tom Troscianko, University of Bristol, Bristol, UK
  • Caterina Ripamonti, University of Cambridge, Cambridge, UK
  • David J. Tolhurst, University of Cambridge, Cambridge, UK

  • Venue:
  • ACM Transactions on Applied Perception (TAP)
  • Year:
  • 2006

Abstract

How different are two images when viewed by a human observer? There is a class of computational models that attempt to predict perceived differences between subtly different images. These models are derived from theoretical considerations of human vision and are mostly validated against psychophysical experiments on stimuli such as sinusoidal gratings. We are developing a model of visual difference prediction, based on multiscale analysis of local contrast, to be tested with psychophysical discrimination experiments on natural-scene stimuli. Here, we extend our model to account for differences in the chromatic domain by modeling differences in the luminance domain and in two opponent chromatic domains. We describe psychophysical measurements of objective (discrimination thresholds) and subjective (magnitude estimations) perceptual differences between visual stimuli derived from colored photographs of natural scenes. We use one set of psychophysical data to determine the best parameters for the model and then determine the extent to which the model generalizes to other experimental data. In particular, we show that cues from different spatial scales and from the separate luminance and chromatic channels contribute roughly equally to discrimination, and that these several cues are combined in a relatively straightforward manner. In general, the model provides good predictions of both threshold and suprathreshold image differences arising from a wide variety of geometrical and optical manipulations. This implies that models of this class can be generally useful in specifying how different two similar images will look to human observers.
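
The abstract describes the model only at a high level. As a rough, self-contained illustration of the general approach (not the authors' calibrated model), the sketch below predicts a single-number difference between two RGB images by converting them to a luminance channel plus two colour-opponent channels, computing band-pass local-contrast maps at several spatial scales, and pooling the per-channel, per-scale contrast differences with a Minkowski sum. The opponent-channel weights, blur width, number of scales, and Minkowski exponent are all placeholder assumptions, not values from the paper.

```python
import numpy as np

def to_opponent(rgb):
    """Map an RGB image (H, W, 3, floats in [0, 1]) to a crude luminance +
    two colour-opponent representation. Weights are illustrative only."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = 0.299 * r + 0.587 * g + 0.114 * b   # luminance channel
    rg = r - g                                 # red-green opponent channel
    by = b - 0.5 * (r + g)                     # blue-yellow opponent channel
    return np.stack([lum, rg, by], axis=-1)

def gaussian_blur(channel, sigma):
    """Separable Gaussian blur of a 2-D channel using numpy convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, channel)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)
    return out

def multiscale_contrast(channel, n_scales=4, sigma=2.0):
    """Band-pass (local-contrast) maps: differences between successively
    blurred versions of the channel, one map per spatial scale."""
    maps, current = [], channel
    for _ in range(n_scales):
        blurred = gaussian_blur(current, sigma)
        maps.append(current - blurred)   # local contrast at this scale
        current = blurred
    return maps

def predicted_difference(img_a, img_b, n_scales=4, minkowski_m=3.0):
    """Pool channel- and scale-wise contrast differences into one number."""
    opp_a, opp_b = to_opponent(img_a), to_opponent(img_b)
    cues = []
    for c in range(3):  # luminance, red-green, blue-yellow
        maps_a = multiscale_contrast(opp_a[..., c], n_scales)
        maps_b = multiscale_contrast(opp_b[..., c], n_scales)
        for ma, mb in zip(maps_a, maps_b):
            cues.append(np.abs(ma - mb).mean())
    cues = np.array(cues)
    return (cues**minkowski_m).sum() ** (1.0 / minkowski_m)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((64, 64, 3))
    perturbed = np.clip(base + 0.02 * rng.standard_normal(base.shape), 0.0, 1.0)
    print(predicted_difference(base, perturbed))
```

In this toy version the Minkowski exponent controls how the twelve cues (three channels times four scales) are combined; an exponent near 1 sums the cues, while a large exponent lets the strongest single cue dominate, which is the kind of cue-combination question the paper's experiments address.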