A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Saliency, Scale and Image Description. International Journal of Computer Vision.
Stylization and abstraction of photographs. Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques.
A Coherent Computational Approach to Model Bottom-Up Visual Attention. IEEE Transactions on Pattern Analysis and Machine Intelligence.
VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search. Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence.
Motion features to enhance scene segmentation in active visual attention. Pattern Recognition Letters.
An Integrated Model of Top-Down and Bottom-Up Attention for Optimizing Detection Speed. Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), Volume 2.
Visual surveillance by dynamic visual attention method. Pattern Recognition.
Dynamic visual attention model in image sequences. Image and Vision Computing.
Dynamic visual selective attention model. Neurocomputing.
Improved seam carving for video retargeting. ACM SIGGRAPH 2008 Papers.
Modelling Spatio-Temporal Saliency to Predict Gaze Direction for Short Videos. International Journal of Computer Vision.
IEEE Transactions on Image Processing.
A biologically inspired object-based visual attention model. Artificial Intelligence Review.
Foveation scalable video coding with automatic fixation selection. IEEE Transactions on Image Processing.
This paper proposes a dynamic saliency attention model based on local complexity. Low-level visual features are extracted from the current frame and several previous frames, and every feature map is resized to several different scales. For each feature and each scale, the feature maps from all frames are used to compute a local complexity map. All local complexity maps are normalized and fused into a dynamic saliency map. At the same time, a static saliency map is computed from the current frame alone. The dynamic and static saliency maps are then fused into the final saliency map. Experimental results indicate that when there is noise or illumination change across frames, our model outperforms Marat's model and Shi's model, and when the moving objects do not lie within the static salient regions, our model outperforms Ban's model.
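The pipeline described above (multi-scale maps over a frame stack, per-scale local complexity, normalization, and dynamic/static fusion) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper's exact local-complexity measure is not reproduced, so per-pixel temporal variance across frames stands in for it, and the fusion weight `alpha`, the stride-based rescaling, and the function names are all assumptions.

```python
import numpy as np

def normalize(m):
    """Rescale a map to [0, 1]; return zeros for a constant map."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def dynamic_saliency(frames, scales=(1, 2, 4)):
    """Fuse 'local complexity' maps computed at several scales.
    Assumption: temporal variance of each pixel across the frame stack
    is used as a proxy for the paper's local-complexity measure."""
    h, w = frames[0].shape
    fused = np.zeros((h, w))
    for s in scales:
        # Resize every frame's feature map to the same reduced size (strided sampling).
        stack = np.stack([f[::s, ::s] for f in frames])
        comp = np.var(stack, axis=0)  # per-pixel complexity over time at this scale
        # Upsample back to full resolution and accumulate the normalized map.
        up = np.kron(normalize(comp), np.ones((s, s)))[:h, :w]
        fused += up
    return normalize(fused)

def final_saliency(frames, static_map, alpha=0.5):
    """Weighted fusion of the dynamic and static saliency maps
    (the weight alpha is an assumption, not taken from the paper)."""
    return alpha * dynamic_saliency(frames) + (1 - alpha) * normalize(static_map)
```

A usage example: given a list of grayscale frames as 2-D arrays and a static saliency map for the current frame, `final_saliency(frames, static_map)` returns a map in [0, 1] of the same size, where regions that are both statically conspicuous and temporally complex score highest.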