A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Modeling attention to salient proto-objects. Neural Networks (2006 Special Issue).
Curious George: An attentive semantic robot. Robotics and Autonomous Systems.
Computational visual attention systems and their cognitive foundations: A survey. ACM Transactions on Applied Perception (TAP).
Advertisement evaluation using visual saliency based on foveated image. Proceedings of the 2009 IEEE International Conference on Multimedia and Expo (ICME '09).
Biological plausibility of spectral domain approach for spatiotemporal visual saliency. Proceedings of the 15th International Conference on Advances in Neuro-Information Processing (ICONIP '08), Part I.
Image Signature: Highlighting Sparse Salient Regions. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Predicting human gaze using quaternion DCT image signature saliency and face detection. Proceedings of the 2012 IEEE Workshop on the Applications of Computer Vision (WACV '12).
Fast anisotropic Gauss filtering. IEEE Transactions on Image Processing.
Hypercomplex Fourier Transforms of Color Images. IEEE Transactions on Image Processing.
Context-Aware Saliency Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Saliency detection based on integrated features. Neurocomputing.
In recent years, several authors have reported that spectral saliency detection methods provide state-of-the-art performance in predicting human gaze in images (see, e.g., [1-3]). We systematically integrate and evaluate quaternion DCT- and FFT-based spectral saliency detection [3,4], weighted quaternion color space components [5], and the use of multiple resolutions [1]. Furthermore, we propose the use of eigenaxes and eigenangles in spectral saliency models that are based on the quaternion Fourier transform. We demonstrate state-of-the-art performance on the Bruce-Tsotsos (Toronto), Judd (MIT), and Kootstra-Schomacker eye-tracking data sets.
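To make the underlying idea concrete, the following is a minimal sketch of the single-channel (grayscale) DCT image-signature saliency baseline that the quaternion DCT methods generalize to color: take the sign pattern of the image's DCT coefficients, invert the transform, square pointwise, and smooth. The function name, the Gaussian smoothing bandwidth `sigma`, and the normalization step are illustrative choices, not details taken from the paper; the paper's quaternion variants instead transform all color channels jointly as one quaternion-valued signal.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def dct_image_signature_saliency(image, sigma=3.0):
    """Sketch of single-channel DCT image-signature saliency.

    image: 2-D float array (grayscale). Returns a saliency map of the
    same shape, normalized to [0, 1]. sigma is an illustrative
    smoothing parameter.
    """
    # The "image signature": the sign pattern of the 2-D DCT coefficients.
    signature = np.sign(dctn(image, norm='ortho'))
    # Reconstruct from the signature alone and square pointwise.
    recon = idctn(signature, norm='ortho')
    # Smooth the squared reconstruction to obtain the saliency map.
    saliency = gaussian_filter(recon * recon, sigma)
    # Normalize to [0, 1] for comparison/visualization.
    saliency -= saliency.min()
    maxval = saliency.max()
    return saliency / maxval if maxval > 0 else saliency

# Toy usage: a bright block on a noisy background should dominate the map.
rng = np.random.default_rng(0)
img = 0.05 * rng.standard_normal((64, 64))
img[20:30, 20:30] += 1.0
smap = dct_image_signature_saliency(img)
```

A multi-scale variant, as evaluated in the paper, would compute such maps at several image resolutions and combine them.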