Image and video saliency models improvement by blur identification

  • Authors:
  • Yoann Baveye; Fabrice Urban; Christel Chamaret

  • Affiliations:
  • Technicolor, Cesson Sevigne, France (all authors)

  • Venue:
  • ICCVG'12: Proceedings of the 2012 International Conference on Computer Vision and Graphics
  • Year:
  • 2012

Abstract

Visual saliency models aim at predicting where people look. In free-viewing conditions, people look at relevant objects that are in focus. Assuming that blurred or out-of-focus objects do not belong to the region of interest, this paper proposes and validates a significant improvement of a saliency model that takes blur into account. Blur identification is combined with a spatio-temporal saliency model. Bottom-up models are designed to mimic the low-level processing of the human visual system and can therefore detect out-of-focus objects as salient. Blur identification decreases saliency values in blurred areas while increasing them in sharp areas. To validate the new saliency model, we conducted eye-tracking experiments to record ground-truth observer fixations on images and videos. Blur identification significantly improves fixation prediction for both natural images and videos.
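
The following is a minimal sketch of the general idea described in the abstract: attenuating a bottom-up saliency map in blurred regions and boosting it in sharp regions. The sharpness measure (local variance of the Laplacian) and the linear reweighting with an `alpha` parameter are illustrative assumptions, not the combination rule actually used by the authors.

```python
# Hypothetical sketch: modulate a saliency map with a per-pixel sharpness map.
# The sharpness estimate and the reweighting scheme below are assumptions for
# illustration, not the method from the paper.
import numpy as np
from scipy import ndimage

def sharpness_map(gray, window=15):
    """Estimate local sharpness as the variance of the Laplacian in a window."""
    lap = ndimage.laplace(gray.astype(np.float64))
    mean = ndimage.uniform_filter(lap, size=window)
    mean_sq = ndimage.uniform_filter(lap ** 2, size=window)
    var = np.maximum(mean_sq - mean ** 2, 0.0)
    # Normalize to [0, 1]; sharp (in-focus) regions map to values near 1.
    return var / (var.max() + 1e-12)

def modulate_saliency(saliency, gray, alpha=0.5):
    """Boost saliency in sharp areas and attenuate it in blurred areas.

    saliency : 2-D array in [0, 1] from any bottom-up saliency model
    gray     : 2-D grayscale image the saliency map was computed from
    alpha    : strength of the blur-based reweighting (assumed parameter)
    """
    s = sharpness_map(gray)
    # Weights > 1 in sharp regions, < 1 in blurred regions.
    weights = 1.0 + alpha * (2.0 * s - 1.0)
    out = saliency * weights
    return np.clip(out / (out.max() + 1e-12), 0.0, 1.0)
```

In an evaluation such as the one reported in the paper, the reweighted map would be compared against eye-tracking fixation data with standard metrics (e.g. NSS or AUC) to check whether the blur-based modulation improves fixation prediction.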