Eye Movements in Visual Cognition: A Computational Study

  • Authors:
  • Rajesh P.N. Rao; Gregory J. Zelinsky; Mary M. Hayhoe; Dana H. Ballard


  • Year:
  • 1997


Abstract

Visual cognition depends critically on the moment-to-moment orientation of gaze. Gaze is changed by saccades, rapid eye movements that orient the fovea over targets of interest in a visual scene. Saccades are ballistic: a target location is specified before the movement begins, and visual feedback during the movement is precluded. Once a target is fixated, gaze is typically held for about 300 milliseconds, although both longer and shorter fixations occur. Despite these distinctive properties, there has been no specific computational model of the gaze-targeting strategy employed by the human visual system during visual cognitive tasks. This paper proposes such a model, which uses iconic scene representations derived from oriented spatiochromatic filters at multiple scales. Visual search for a target object proceeds in a coarse-to-fine fashion, with the target's largest-scale filter responses compared first. Task-relevant target locations are represented as saliency maps, which are used to program eye movements. Once fixated, targets are remembered using spatial memory in the form of object-centered maps. The model was tested empirically by comparing its performance with eye movement data recorded from human subjects in natural visual search tasks. The results show excellent agreement between the eye movements predicted by the model and those recorded from the subjects.
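The coarse-to-fine search described in the abstract can be sketched in a few lines. In this illustrative simplification (not the authors' implementation), mean pooling stands in for the paper's oriented spatiochromatic filter bank, negated sum-of-squared-differences template matching stands in for the filter-response comparison, and the per-scale match maps are accumulated, coarsest scale first, into a single saliency map whose maximum gives the predicted fixation:

```python
import numpy as np

def pool(img, s):
    """Mean-pool a 2-D image by factor s (stand-in for a coarse
    filter-bank response at scale s)."""
    h, w = img.shape[0] // s, img.shape[1] // s
    return img[:h * s, :w * s].reshape(h, s, w, s).mean(axis=(1, 3))

def match_map(scene, target):
    """Negated SSD between the target template and every scene patch
    (higher = better match)."""
    th, tw = target.shape
    out = np.zeros((scene.shape[0] - th + 1, scene.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = -np.sum((scene[i:i + th, j:j + tw] - target) ** 2)
    return out

def saliency(scene, target, scales=(4, 2, 1)):
    """Accumulate match maps coarsest scale first, upsampling each
    coarse map to the finest grid before adding it in."""
    sal = None
    for s in scales:
        m = np.kron(match_map(pool(scene, s), pool(target, s)),
                    np.ones((s, s)))  # upsample to finest resolution
        sal = m if sal is None else sal[:m.shape[0], :m.shape[1]] + m
    return sal

def next_fixation(scene, target):
    """Predicted saccade target: top-left corner of the most salient
    (best-matching) region."""
    sal = saliency(scene, target)
    return np.unravel_index(np.argmax(sal), sal.shape)
```

A real model would replace the pooling with oriented multi-scale filters and add the object-centered memory maps described above; the coarse-to-fine accumulation and saliency-maximum fixation rule are the parts this sketch is meant to show.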