Visual Saliency with Statistical Priors

  • Authors:
  • Jia Li, Yonghong Tian, Tiejun Huang

  • Affiliations:
  • National Engineering Laboratory for Video Technology, School of EE & CS, Peking University, Beijing, China (all authors)

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2014

Abstract

Visual saliency is a useful cue for locating conspicuous image content. To estimate saliency, many approaches have been proposed that detect unique or rare visual stimuli. However, such bottom-up solutions are often insufficient because they ignore prior knowledge, which typically imposes a biased selectivity on the input stimuli. To address this problem, this paper presents a novel approach that estimates image saliency by learning prior knowledge. In our approach, the influences of the visual stimuli and the prior knowledge are jointly incorporated into a Bayesian framework. Within this framework, bottom-up saliency is computed to pop out visual subsets that are probably salient, while the prior knowledge is used to recover wrongly suppressed targets and to inhibit improperly popped-out distractors. In contrast to existing approaches, the prior knowledge used here, comprising a foreground prior and a correlation prior, is statistically learned from 9.6 million images in an unsupervised manner. Experimental results on two public benchmarks show that these statistical priors effectively modulate bottom-up saliency, yielding impressive improvements over 10 state-of-the-art methods.
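
The sketch below is only an illustration of the general idea described in the abstract, not the authors' actual formulation: a learned prior map modulating a bottom-up saliency map in a Bayesian-style product. The inputs `bottom_up` and `prior` are hypothetical stand-ins; in the paper, they would come from a bottom-up detector and from priors learned over roughly 9.6 million images.

```python
import numpy as np

def fuse_saliency(bottom_up: np.ndarray, prior: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Combine a bottom-up saliency map with a prior map (illustrative only).

    Both inputs are 2-D arrays of non-negative scores over the same image grid.
    The product acts like an unnormalized posterior: the prior can recover
    targets the bottom-up stage suppressed and damp popped-out distractors.
    """
    # Normalize each map to [0, 1] so neither term dominates by scale alone.
    bu = (bottom_up - bottom_up.min()) / (bottom_up.max() - bottom_up.min() + eps)
    pr = (prior - prior.min()) / (prior.max() - prior.min() + eps)
    posterior = bu * pr
    # Rescale the fused map back to [0, 1] for thresholding or display.
    return (posterior - posterior.min()) / (posterior.max() - posterior.min() + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bottom_up = rng.random((48, 64))   # stand-in bottom-up saliency map
    prior = np.zeros((48, 64))
    prior[12:36, 16:48] = 1.0          # stand-in foreground/center prior
    fused = fuse_saliency(bottom_up, prior)
    print(fused.shape, float(fused.min()), float(fused.max()))
```

The multiplicative fusion is one simple reading of "jointly incorporated into a Bayesian framework"; the paper's actual priors and combination rule are more elaborate than this toy example.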