Image saliency: From intrinsic to extrinsic context

  • Authors:
  • Meng Wang; J. Konrad; P. Ishwar; K. Jing; H. Rowley

  • Affiliations:
  • Dept. of Electr. & Comput. Eng., Boston Univ., Boston, MA, USA (M. Wang, J. Konrad, P. Ishwar); Google Res., Mountain View, CA, USA (K. Jing, H. Rowley)

  • Venue:
  • CVPR '11: Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition
  • Year:
  • 2011

Abstract

We propose a novel framework for automatic saliency estimation in natural images. We consider saliency to be an anomaly with respect to a given context, which can be global or local. In the case of global context, we estimate saliency in the whole image relative to a large dictionary of images. Unlike some prior methods, this dictionary is not annotated, i.e., saliency is assumed unknown. In the case of local context, we partition the image into patches and estimate saliency in each patch relative to a large dictionary of un-annotated patches drawn from the rest of the image. A unified framework handles both cases in three steps. First, given an input (image or patch), we extract its k nearest neighbors from the dictionary. Then, we geometrically warp each neighbor to match the input. Finally, we derive the saliency map from the mean absolute error between the input and all of its warped neighbors. The algorithm is not only easy to implement but also outperforms state-of-the-art methods.
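In symbols, given an input I and its k warped nearest neighbors W_1, ..., W_k, the saliency at pixel x is S(x) = (1/k) * sum_i |I(x) - W_i(x)|. The sketch below illustrates the three-step pipeline; it is a minimal reading of the abstract, not the authors' implementation. In particular, the retrieval feature (raw-pixel L2 distance), the warp model (an exhaustive search over small translations), and all function names are assumptions, since the abstract specifies none of them.

```python
import numpy as np

def knn_indices(query, dictionary, k):
    # Retrieve the k dictionary entries closest to the query.
    # Raw-pixel L2 distance is an assumed stand-in for the paper's features.
    dists = np.array([np.linalg.norm(query - d) for d in dictionary])
    return np.argsort(dists)[:k]

def warp_to_input(neighbor, input_img, radius=2):
    # Assumed warp model: exhaustive search over small integer translations,
    # keeping the shift that minimizes mean absolute error against the input.
    # The paper's actual geometric warp is richer; this is a placeholder.
    best, best_err = neighbor, np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(neighbor, (dy, dx), axis=(0, 1))
            err = np.mean(np.abs(shifted - input_img))
            if err < best_err:
                best, best_err = shifted, err
    return best

def saliency_map(input_img, dictionary, k=5):
    # Saliency = per-pixel mean absolute error between the input and its
    # k warped nearest neighbors, as described in the abstract.
    idx = knn_indices(input_img, dictionary, k)
    warped = np.stack([warp_to_input(dictionary[i], input_img) for i in idx])
    return np.abs(warped - input_img[None]).mean(axis=0)

# Toy usage: a small random "dictionary" of grayscale images and an input
# that copies one entry but plants a bright square (the anomaly to detect).
rng = np.random.default_rng(0)
dictionary = [rng.random((32, 32)) for _ in range(50)]
input_img = dictionary[0].copy()
input_img[10:14, 10:14] = 2.0          # anomaly w.r.t. the global context
smap = saliency_map(input_img, dictionary[1:], k=5)
print(smap[10:14, 10:14].mean(), smap.mean())  # anomaly region scores higher
```

For the local-context case, the same routine would be invoked per patch, with the dictionary built from un-annotated patches of the rest of the image.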