A Rule Based Technique for Extraction of Visual Attention Regions Based on Real-Time Clustering
IEEE Transactions on Multimedia
Visual attention detection is an important technique in many computer vision applications. In this paper, we propose an algorithm that extracts a salient object from an image by combining bottom-up and top-down computations. In the bottom-up computation, segment-based color contrast and attention values are used to compose a bottom-up saliency map. In the top-down computation, in-focus areas of the image are detected via wavelet transforms to derive attention values for a segment-based top-down saliency map. The attention values from both maps are merged by linear combination, and foreground/background-based salient object extraction then forms the output object. Experiments on 1,200 color images show that the proposed algorithm yields a high level of satisfaction.
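The pipeline described above — a segment-based bottom-up map from color contrast, a top-down map from wavelet-based focus detection, and a linear combination of the two — can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the bottom-up cue here is each segment's color distance to the global mean color, the focus cue is one-level Haar detail energy, and `combined_saliency` and `haar_detail_energy` are hypothetical names.

```python
import numpy as np

def haar_detail_energy(gray):
    """One-level 2-D Haar transform; return upsampled detail-band energy.

    High detail energy serves as a simple proxy for in-focus areas
    (the paper uses wavelet transforms for the same purpose).
    """
    h, w = gray.shape
    g = gray[: h - h % 2, : w - w % 2].astype(float)
    a = g[0::2, 0::2]; b = g[0::2, 1::2]
    c = g[1::2, 0::2]; d = g[1::2, 1::2]
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    energy = lh ** 2 + hl ** 2 + hh ** 2
    return np.kron(energy, np.ones((2, 2)))  # back to image resolution

def combined_saliency(image, segments, alpha=0.5):
    """Linear combination of segment-based bottom-up and top-down maps.

    `image`: H x W x 3 float array; `segments`: H x W integer label map
    (assumed to come from any prior segmentation step).
    """
    gray = image.mean(axis=2)
    focus = haar_detail_energy(gray)
    full = np.zeros_like(gray)          # pad back if dimensions were odd
    full[: focus.shape[0], : focus.shape[1]] = focus

    mean_color = image.reshape(-1, 3).mean(axis=0)
    bu = np.zeros_like(gray)            # bottom-up: segment color contrast
    td = np.zeros_like(gray)            # top-down: segment focus energy
    for lab in np.unique(segments):
        mask = segments == lab
        bu[mask] = np.linalg.norm(image[mask].mean(axis=0) - mean_color)
        td[mask] = full[mask].mean()

    for m in (bu, td):                  # normalize each map to [0, 1]
        rng = m.max() - m.min()
        if rng > 0:
            m -= m.min()
            m /= rng
    return alpha * bu + (1 - alpha) * td
```

A thresholded version of the returned map would then feed the foreground/background classification step that extracts the final object.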