To learn the preferential visual attention given by humans to specific image content, we present NUSEF, an eye fixation database compiled from a pool of 758 images and 75 subjects. Eye fixations are an excellent modality for learning semantics-driven human understanding of images, which differs substantially from the feature-driven approaches employed by saliency computation algorithms. The database comprises fixation patterns acquired with an eye-tracker as subjects free-viewed images spanning many semantic categories, such as faces (human and mammal), nudes, and actions (look, read, and shoot). The consistent presence of fixation clusters around specific image regions confirms that visual attention is not subjective but is directed toward salient objects and object interactions. We then show how these fixation clusters can be exploited to enhance image understanding by using our eye fixation database in an active image segmentation application. Besides proposing a mechanism to automatically determine characteristic fixation seeds for segmentation, we show that using fixation seeds generated from multiple fixation clusters on the salient object yields a 10% improvement in segmentation performance over the state-of-the-art.
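The idea of turning fixation clusters into segmentation seeds can be sketched as follows. This is an illustrative approximation, not the paper's actual algorithm: the greedy radius-based grouping, the `radius` parameter, and the function name are all assumptions made for the example.

```python
def cluster_fixations(fixations, radius=50.0):
    """Greedily group 2D fixation points: a fixation joins the first
    cluster whose running centroid lies within `radius` pixels,
    otherwise it starts a new cluster. Returns one centroid per
    cluster, which can serve as a segmentation seed point.
    NOTE: illustrative sketch only, not the method from the paper."""
    clusters = []  # each entry: [sum_x, sum_y, count]
    for x, y in fixations:
        for c in clusters:
            cx, cy = c[0] / c[2], c[1] / c[2]
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                c[0] += x
                c[1] += y
                c[2] += 1
                break
        else:
            clusters.append([x, y, 1])
    return [(c[0] / c[2], c[1] / c[2]) for c in clusters]

# Two spatially separated groups of fixations yield two seeds,
# one per fixation cluster on the (hypothetical) salient objects.
fixations = [(100, 100), (105, 98), (102, 103), (400, 300), (405, 295)]
seeds = cluster_fixations(fixations)
```

Each returned centroid would then initialize an active segmentation of the object under that cluster; using several seeds per salient object is what the abstract credits for the reported improvement.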