A Model of Saliency-Based Visual Attention for Rapid Scene Analysis
IEEE Transactions on Pattern Analysis and Machine Intelligence
ICPR '02 Proceedings of the 16th International Conference on Pattern Recognition (ICPR'02), Volume 4
"GrabCut": interactive foreground extraction using iterated graph cuts
ACM SIGGRAPH 2004 Papers
Visual attention detection in video sequences using spatiotemporal cues
MULTIMEDIA '06 Proceedings of the 14th annual ACM international conference on Multimedia
Determining Patch Saliency Using Low-Level Context
ECCV '08 Proceedings of the 10th European Conference on Computer Vision: Part II
Sketch2Photo: internet image montage
ACM SIGGRAPH Asia 2009 Papers
Is bottom-up attention useful for object recognition?
CVPR '04 Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Learning to Detect a Salient Object
IEEE Transactions on Pattern Analysis and Machine Intelligence
Unsupervised extraction of visual attention objects in color images
IEEE Transactions on Circuits and Systems for Video Technology
Exploiting local and global patch rarities for saliency detection
CVPR '12 Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Saliency filters: Contrast based filtering for salient region detection
CVPR '12 Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
In this paper, we exploit color contrast and color distribution cues to produce high-quality saliency maps. Our unified framework proceeds through superpixel pre-segmentation, color contrast and color distribution computation, cue combination, final refinement, and object segmentation. During color contrast saliency computation, we combine two color systems and introduce a distribution prior, applied before saliency smoothing, to select the correct color components. In addition, we propose a novel saliency smoothing procedure that operates on superpixel regions in color space; this step highlights the whole object evenly and yields high-quality color contrast saliency maps. Finally, a new refinement approach eliminates artifacts and recovers unconnected parts in the combined saliency maps. In visual comparisons, our method produces higher-quality saliency maps that emphasize the entire object while suppressing background clutter. Both qualitative and quantitative experiments show that our approach outperforms 8 state-of-the-art methods, achieving the highest precision rate of 96% (a 3% improvement over the previous best) on one of the most popular data sets [1]. Our saliency maps also enable excellent content-aware image resizing.
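The core of the pipeline is fusing a color-contrast (uniqueness) cue with a color-distribution (compactness) cue computed over superpixels. The NumPy sketch below illustrates one common way to formulate and fuse these two cues; the function names, parameters (`sigma_p`, `sigma_c`, `k`), and the exponential fusion rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def contrast_saliency(colors, positions, sigma_p=0.25):
    """Contrast cue: how much a superpixel's mean color differs from
    spatially nearby superpixels. colors: (N, 3) mean Lab colors;
    positions: (N, 2) superpixel centers normalized to [0, 1]."""
    cdist = np.linalg.norm(colors[:, None] - colors[None], axis=-1)
    pdist = np.linalg.norm(positions[:, None] - positions[None], axis=-1)
    w = np.exp(-pdist ** 2 / (2 * sigma_p ** 2))   # nearby regions weigh more
    w /= w.sum(axis=1, keepdims=True)
    return (w * cdist).sum(axis=1)

def distribution_saliency(colors, positions, sigma_c=20.0):
    """Distribution cue: spatial variance of each superpixel's color.
    A compactly distributed color (low variance) is more likely to
    belong to the salient object than to scattered background."""
    cdist = np.linalg.norm(colors[:, None] - colors[None], axis=-1)
    w = np.exp(-cdist ** 2 / (2 * sigma_c ** 2))   # similar colors weigh more
    w /= w.sum(axis=1, keepdims=True)
    mu = w @ positions                             # weighted mean position per color
    return (w * ((positions[None] - mu[:, None]) ** 2).sum(axis=-1)).sum(axis=1)

def combined_saliency(colors, positions, k=6.0):
    """Fuse the cues: regions that are both high-contrast and compactly
    distributed receive the highest saliency."""
    u = contrast_saliency(colors, positions)
    d = distribution_saliency(colors, positions)
    u = (u - u.min()) / (u.max() - u.min() + 1e-12)
    d = (d - d.min()) / (d.max() - d.min() + 1e-12)
    s = u * np.exp(-k * d)                         # penalize widely spread colors
    return (s - s.min()) / (s.max() - s.min() + 1e-12)
```

On a toy scene with gray superpixels scattered across the image and a pair of red superpixels clustered at the center, the red cluster scores highest: it is both unlike its surroundings (contrast) and spatially compact (distribution).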