A saliency detection model based on local and global kernel density estimation
ICONIP'11: Proceedings of the 18th International Conference on Neural Information Processing - Volume Part I
This paper presents a novel computational model for saliency detection. The proposed model uses a feature-level fusion method to integrate different kinds of visual features, and the integrated features are used to measure saliency directly, so neither separate feature conspicuity maps nor their subsequent combination is needed. The model then combines local and global measurements for estimating saliency (termed LGMES) by applying local and global kernel density estimation during the saliency computation. Experimental results on two human eye fixation datasets demonstrate that the proposed model outperforms state-of-the-art methods. Moreover, the combined saliency measurement is more efficient than methods that rely on local or global measurements alone.
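The core idea of combining local and global kernel density estimates over fused feature vectors can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, bandwidths, neighbourhood radius, and the multiplicative combination of the two rarity terms are all illustrative assumptions; the sketch only shows the general principle that a patch whose fused features are rare globally, or distinct from its spatial neighbours locally, receives high saliency.

```python
import numpy as np


def gaussian_kernel(d2, h):
    """Isotropic Gaussian kernel evaluated on squared distances with bandwidth h."""
    return np.exp(-d2 / (2.0 * h * h))


def saliency_lg(features, positions, h_f=0.5, local_radius=15.0):
    """Toy local/global kernel-density saliency (illustrative, not the paper's method).

    features  : (N, D) fused feature vectors, one per image patch
    positions : (N, 2) patch centre coordinates
    A patch whose features have LOW estimated density among all patches
    (globally rare) or among its spatial neighbours (locally distinct)
    gets HIGH saliency.
    """
    # Pairwise squared feature distances and spatial distances.
    fd2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    sd2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)

    kf = gaussian_kernel(fd2, h_f)

    # Global density estimate: average feature-kernel response over all patches.
    global_density = kf.mean(axis=1)

    # Local density estimate: restrict the average to spatial neighbours.
    neigh = sd2 <= local_radius ** 2  # always includes the patch itself
    local_density = (kf * neigh).sum(axis=1) / neigh.sum(axis=1)

    # Low density (rarity) maps to high saliency; combine the two terms
    # multiplicatively (one of several plausible combination rules).
    eps = 1e-8
    saliency = 1.0 / ((global_density + eps) * (local_density + eps))
    return saliency / saliency.max()
```

In this sketch a single outlier patch, whose fused features differ from every other patch, receives the maximum saliency score, while patches in homogeneous regions score near zero.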