Visual saliency is a useful cue for locating conspicuous image content. Many approaches estimate saliency by detecting unique or rare visual stimuli. However, such bottom-up solutions are often insufficient because they ignore prior knowledge, which typically induces a biased selectivity over the input stimuli. To address this problem, this paper presents a novel approach that estimates image saliency by learning such prior knowledge. In our approach, the influences of the visual stimuli and the prior knowledge are jointly incorporated into a Bayesian framework: the bottom-up saliency pops out the visual subsets that are probably salient, while the prior knowledge recovers wrongly suppressed targets and inhibits improperly popped-out distractors. Unlike existing approaches, the prior knowledge used in our approach, comprising a foreground prior and a correlation prior, is statistically learned from 9.6 million images in an unsupervised manner. Experimental results on two public benchmarks show that these statistical priors effectively modulate the bottom-up saliency, yielding impressive improvements over 10 state-of-the-art methods.
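The modulation described above can be sketched in a minimal form: a bottom-up saliency map is combined multiplicatively with a learned prior map, so that the prior can recover a suppressed target and inhibit a popped-out distractor. This is only an illustrative Bayesian-style sketch under assumed inputs; the function name `modulated_saliency` and the toy maps are hypothetical, and the paper's actual formulation of the foreground and correlation priors is more elaborate.

```python
import numpy as np

def modulated_saliency(bottom_up, prior, eps=1e-8):
    """Hypothetical sketch: modulate a bottom-up saliency map by a
    learned prior map (multiplicative combination, as in a simple
    Bayesian product of evidence and prior)."""
    s = bottom_up * prior
    # Rescale to [0, 1] so the result remains a valid saliency map.
    return (s - s.min()) / (s.max() - s.min() + eps)

# Toy 4x4 example: the bottom-up map pops out a distractor (top-left)
# and nearly misses a target (bottom-right); the prior corrects both.
bottom_up = np.array([[0.9, 0.1, 0.1, 0.1],
                      [0.1, 0.1, 0.1, 0.1],
                      [0.1, 0.1, 0.1, 0.1],
                      [0.1, 0.1, 0.1, 0.4]])
prior = np.array([[0.1, 0.5, 0.5, 0.5],
                  [0.5, 0.5, 0.5, 0.5],
                  [0.5, 0.5, 0.5, 0.5],
                  [0.5, 0.5, 0.5, 0.9]])
result = modulated_saliency(bottom_up, prior)
# After modulation, the bottom-right target dominates the map.
```

In this toy example the distractor's bottom-up response (0.9) is damped by a low prior (0.1), while the target's weak response (0.4) is amplified by a high prior (0.9), so the final maximum moves to the target location.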