Human Learning of Contextual Priors for Object Search: Where does the time go?

  • Authors:
  • Barbara Hidalgo-Sotelo; Aude Oliva; Antonio Torralba

  • Affiliations:
  • Department of Brain and Cognitive Sciences, MIT; Department of Brain and Cognitive Sciences, MIT; Computer Science and Artificial Intelligence Laboratory, MIT

  • Venue:
  • CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Workshops - Volume 03
  • Year:
  • 2005


Abstract

Attention allocation in visual search is known to be influenced by low-level image features, visual scene context, and top-down task constraints. Here, we investigate the role of contextual priors in guiding visual search by monitoring eye movements as participants search very familiar scenes for a target object. The goal of the study is to identify which stage of visual search benefits from contextual priors. Two groups of participants differed in the expectation of target presence associated with a scene: stronger priors are established when a scene exemplar is always associated with the presence of the target than when the scene is periodically observed with and without the target. In both cases, overall search performance improves over repeated presentations of scenes. An analytic decomposition of the time course of the effect of contextual priors shows a time benefit to the exploration stage of search (scan time) and a decrease in gaze duration on the target. The strength of the contextual relationship modulates the magnitude of the gaze duration gain, while the scan time gain constitutes one half of the overall search performance benefit regardless of the probability (50% or 100%) of target presence. These data are discussed in terms of the implications of context-dependent scene processing and its putative role in various stages of visual search.