Human eye fixation points occurring during the early stages of visual processing often correspond to the loci of salient image regions. These salient regions help us identify the interesting parts of an image and support our ability to discriminate between different objects in a scene. They attract our immediate attention without requiring an exhaustive scan of the scene, possessing some quality that makes them stand out from their neighbors. In this paper, we present a bottom-up measure of saliency based on the relationships exhibited among image features. We adopt the standpoint that the relationships among features determine more of the perceived structure in an image than the individual feature attributes do, and we seek those structures that 'pop out.' We capture the organization within an image by employing relational distributions derived from the distance and gradient-direction relationships exhibited between image pixels. We demonstrate that our results coincide with human fixations, and we evaluate the performance of our measure against a dominant saliency model, obtaining comparable results. In an effort to derive meaningful information from an image, we also investigate the significance of scale relative to our saliency measure.
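To make the idea of a relational distribution concrete, the following is a minimal sketch (not the paper's implementation): it samples pixel pairs from a grayscale image and builds a normalized 2D histogram over two pairwise relationships, spatial distance and gradient-direction difference. All function and parameter names (`relational_distribution`, bin counts, the random pair-sampling strategy) are illustrative assumptions.

```python
import numpy as np

def relational_distribution(gray, n_dist_bins=8, n_angle_bins=8,
                            max_pairs=2000, seed=0):
    """Sketch of a relational distribution: a joint histogram over
    pairwise pixel distance and gradient-direction difference.
    Parameter names and sampling scheme are assumptions, not the
    paper's exact method."""
    gray = np.asarray(gray, dtype=float)
    h, w = gray.shape

    # Per-pixel gradient direction (np.gradient returns row, col derivatives).
    gy, gx = np.gradient(gray)
    theta = np.arctan2(gy, gx)

    # Randomly sample pixel pairs instead of enumerating all O(N^2) pairs.
    rng = np.random.default_rng(seed)
    ys = rng.integers(0, h, size=(max_pairs, 2))
    xs = rng.integers(0, w, size=(max_pairs, 2))

    # Relationship 1: Euclidean distance between the two pixels.
    d = np.hypot(ys[:, 0] - ys[:, 1], xs[:, 0] - xs[:, 1])

    # Relationship 2: absolute gradient-direction difference, wrapped to [0, pi].
    dtheta = np.abs(theta[ys[:, 0], xs[:, 0]] - theta[ys[:, 1], xs[:, 1]])
    dtheta = np.minimum(dtheta, 2.0 * np.pi - dtheta)

    # Joint histogram over (distance, direction difference), normalized
    # so it can be treated as a probability distribution.
    max_d = np.hypot(h - 1, w - 1)
    hist, _, _ = np.histogram2d(d, dtheta,
                                bins=[n_dist_bins, n_angle_bins],
                                range=[[0.0, max_d], [0.0, np.pi]])
    return hist / hist.sum()
```

Under this framing, a local region whose relational distribution diverges strongly from the image-wide distribution would be a candidate for a region that 'pops out'.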