A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Saliency, Scale and Image Description. International Journal of Computer Vision.
A Goal Oriented Attention Guidance Model. BMCV '02: Proceedings of the Second International Workshop on Biologically Motivated Computer Vision.
Models of bottom-up and top-down visual attention.
A Coherent Computational Approach to Model Bottom-Up Visual Attention. IEEE Transactions on Pattern Analysis and Machine Intelligence.
VOCUS: A Visual Attention System for Object Detection and Goal-Directed Search. Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence.
An Integrated Model of Top-Down and Bottom-Up Attention for Optimizing Detection Speed. CVPR '06: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Volume 2.
A dynamic saliency attention model based on local complexity
Digital Signal Processing
A biologically inspired object-based visual attention model is proposed in this paper. The model consists of a training phase and an attention phase. In the training phase, all training targets are fused into a target class and all training backgrounds are fused into a background class; for each feature, a weight is computed as the ratio of the mean target-class saliency to the mean background-class saliency, yielding a weight vector. In the attention phase, for an attended scene, all feature maps are combined hierarchically into a top-down saliency map using this weight vector. The top-down and bottom-up saliency maps are then fused into a global saliency map that guides visual attention. Finally, the size of each salient region is determined by maximizing entropy. The merit of our model is that it can attend to a target object of a given class appearing against any background from the corresponding background class. Experimental results indicate that, when the attended target object does not appear against the same background as in the training images, our proposed model outperforms Navalpakkam's model and the top-down approach of VOCUS.
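
To make the described pipeline concrete, below is a minimal sketch in Python of the weighting, fusion, and region-sizing steps as they are summarized in the abstract. The function names, the NumPy feature-map representation, the convex-combination fusion rule, and the histogram-entropy scale selection are illustrative assumptions, not the authors' exact formulation.

import numpy as np

def train_weights(target_maps, background_maps):
    # Training phase: per-feature weight = mean target-class saliency
    # divided by mean background-class saliency (small epsilon avoids
    # division by zero).
    return {f: target_maps[f].mean() / (background_maps[f].mean() + 1e-8)
            for f in target_maps}

def top_down_saliency(feature_maps, weights):
    # Attention phase: combine the feature maps of the attended scene
    # into a top-down saliency map using the learned weights.
    return sum(weights[f] * feature_maps[f] for f in feature_maps)

def global_saliency(top_down, bottom_up, alpha=0.5):
    # Fuse the top-down and bottom-up saliency maps into a global map;
    # a simple convex combination is assumed here.
    return alpha * top_down + (1.0 - alpha) * bottom_up

def salient_region_size(image, center, scales=range(4, 33, 4)):
    # Choose the region size that maximizes the entropy of the local
    # intensity histogram (one plausible reading of the entropy-
    # maximization step).
    y, x = center
    best_s, best_h = None, -np.inf
    for s in scales:
        patch = image[max(y - s, 0):y + s + 1, max(x - s, 0):x + s + 1]
        hist, _ = np.histogram(patch, bins=16)
        p = hist / hist.sum()
        p = p[p > 0]
        h = -(p * np.log2(p)).sum()
        if h > best_h:
            best_s, best_h = s, h
    return best_s

In this sketch the feature maps and weight vector are keyed by feature name (e.g. intensity, color, orientation), and the resulting global saliency map would then be scanned for its maxima to select attended locations.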