A Model of Saliency-Based Visual Attention for Rapid Scene Analysis
IEEE Transactions on Pattern Analysis and Machine Intelligence
A Coherent Computational Approach to Model Bottom-Up Visual Attention
IEEE Transactions on Pattern Analysis and Machine Intelligence
Optimal Cue Combination for Saliency Computation: A Comparison with Human Vision
IWINAC '07 Proceedings of the 2nd international work-conference on Nature Inspired Problem-Solving Methods in Knowledge Engineering: Interplay Between Natural and Artificial Computation, Part II
Saliency Based on Decorrelation and Distinctiveness of Local Responses
CAIP '09 Proceedings of the 13th International Conference on Computer Analysis of Images and Patterns
Spatiotemporal Saliency in Dynamic Scenes
IEEE Transactions on Pattern Analysis and Machine Intelligence
Image Signature: Highlighting Sparse Salient Regions
IEEE Transactions on Pattern Analysis and Machine Intelligence
Linear vs. nonlinear feature combination for saliency computation: a comparison with human vision
DAGM'06 Proceedings of the 28th conference on Pattern Recognition
State-of-the-Art in Visual Attention Modeling
IEEE Transactions on Pattern Analysis and Machine Intelligence
There are many "machine vision" models of the visual saliency mechanism, which controls the selection and allocation of attention to the most "prominent" locations in a scene and helps humans interact efficiently with the visual environment (Itti and Koch, 2001; Gao et al., 2000). It is important to know which models best mimic the saliency mechanism of the human visual system. Several metrics exist for comparing saliency models; however, different metrics produce widely varying model rankings. In this paper, a procedure is proposed for evaluating metrics that compare saliency maps, using a database of human fixations on approximately 1000 images. This procedure is then employed to identify the best metric, which is in turn used to evaluate ten published bottom-up saliency models. An optimal level of blurring and center bias is found for each saliency model. The performance of the models is also analyzed on a dataset of 54 synthetic images.
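As a concrete illustration of the kind of metric being evaluated, the sketch below computes the linear correlation coefficient (CC), one common score for comparing a model's saliency map against a human fixation map. This is a hypothetical minimal example, not code from the paper; the maps are represented as flat lists of values, and the function name is our own.

```python
import math

def correlation_coefficient(saliency, fixation):
    """Pearson correlation between two equally sized, flattened maps.

    Returns a value in [-1, 1]; higher means the saliency map better
    predicts where humans fixated (a common interpretation of CC).
    """
    n = len(saliency)
    mean_s = sum(saliency) / n
    mean_f = sum(fixation) / n
    cov = sum((s - mean_s) * (f - mean_f) for s, f in zip(saliency, fixation))
    var_s = sum((s - mean_s) ** 2 for s in saliency)
    var_f = sum((f - mean_f) ** 2 for f in fixation)
    return cov / math.sqrt(var_s * var_f)

# Toy fixation map: a map identical to it scores ~1.0,
# an inverted map scores ~-1.0.
fix = [0.0, 0.2, 0.9, 0.4]
print(correlation_coefficient(fix, fix))
print(correlation_coefficient([1 - v for v in fix], fix))
```

In practice such a score is averaged over all images in the fixation database, and model maps are first blurred and center-weighted at the optimized levels described above before scoring.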