The ability to automatically detect visually interesting regions in images has many practical applications, especially in the design of active machine vision and automatic visual surveillance systems. Analyzing the statistics of image features at observers' points of gaze can provide insight into the mechanisms of fixation selection in humans. Using a foveated analysis framework, we studied the statistics of four low-level local image features (luminance, RMS contrast, and the bandpass outputs of both luminance and contrast) and found that image patches around human fixations had, on average, higher values of each of these features than image patches selected at random. Contrast-bandpass showed the greatest difference between human and random fixations, followed by luminance-bandpass, RMS contrast, and luminance. Using these measurements, we present a new algorithm that selects image regions as likely candidates for fixation; these regions are shown to correlate well with fixations recorded from human observers.
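The fixated-versus-random patch comparison described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the patch size, the definition of RMS contrast as patch standard deviation normalized by mean luminance, and the uniform random-sampling scheme are all assumptions.

```python
import numpy as np

def patch_stats(image, y, x, half=16):
    """Mean luminance and RMS contrast of a square patch centred at (y, x)."""
    p = image[y - half:y + half, x - half:x + half].astype(float)
    mean_lum = p.mean()
    # RMS contrast: patch luminance std normalised by mean luminance (assumed definition)
    rms = p.std() / mean_lum if mean_lum > 0 else 0.0
    return mean_lum, rms

def compare_fixations(image, fixations, n_random=1000, half=16, seed=0):
    """Average patch statistics at human fixations vs. uniformly random points."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rand = np.column_stack([
        rng.integers(half, h - half, n_random),
        rng.integers(half, w - half, n_random),
    ])
    fix_stats = np.array([patch_stats(image, y, x, half) for y, x in fixations])
    rand_stats = np.array([patch_stats(image, y, x, half) for y, x in rand])
    # Each result is (mean luminance, mean RMS contrast) averaged over patches
    return fix_stats.mean(axis=0), rand_stats.mean(axis=0)
```

On images where fixations cluster on textured regions, the fixated average should exceed the random baseline for contrast, mirroring the patch-statistics differences reported above; the bandpass features would be computed analogously after filtering the luminance and contrast maps.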