Significant progress has been made on computational models of bottom-up visual attention (saliency). However, how to compare such models efficiently remains an open research question even for still images, and the problem becomes more challenging for videos and dynamic saliency. This paper proposes a framework for evaluating dynamic-saliency models, built on a new database of diverse videos for which eye-tracking data have been collected. In addition, we present evaluation results for four state-of-the-art dynamic-saliency models, two of which had not previously been verified against eye-tracking data.
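The abstract does not name the comparison metrics used in the framework. As an illustration only, one widely used way to score a saliency map against recorded fixations is the normalized scanpath saliency (NSS): z-score the map and average it at the fixation locations. The function name and toy data below are hypothetical:

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixations):
    """NSS: mean value of the z-scored saliency map at fixation points.

    saliency_map: 2-D array of saliency values.
    fixations: iterable of (row, col) fixation coordinates.
    """
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    rows, cols = zip(*fixations)
    return float(s[list(rows), list(cols)].mean())

# Toy example: a map peaked where the fixations fall scores well above 0,
# while a fixation on the flat background scores near or below 0.
smap = np.zeros((8, 8))
smap[3:5, 3:5] = 1.0
print(normalized_scanpath_saliency(smap, [(3, 3), (4, 4)]))  # high (> 3)
print(normalized_scanpath_saliency(smap, [(0, 0)]))          # near 0
```

NSS is convenient for video because it can be computed per frame and averaged over a sequence, which is one plausible way a dynamic-saliency evaluation could aggregate scores.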