An integrated model of eye movements and visual encoding

  • Authors: Dario D. Salvucci
  • Affiliation: Cambridge Basic Research, Four Cambridge Center, Cambridge, MA 02142, USA
  • Venue: Cognitive Systems Research
  • Year: 2001

Abstract

Recent computational models of cognition have made good progress in accounting for the visual processes needed to encode external stimuli. However, these models typically incorporate simplified accounts of visual processing that assume a constant encoding time for all visual objects and do not distinguish between eye movements and shifts of attention. This paper presents a domain-independent computational model, EMMA, that provides a more rigorous account of eye movements and visual encoding and their interaction with a cognitive processor. The visual-encoding component of the model describes the effects of frequency and foveal eccentricity when encoding visual objects as internal representations. The eye-movement component describes the temporal and spatial characteristics of eye movements as they arise from shifts of visual attention. When integrated with a cognitive model, EMMA generates quantitative predictions concerning when and where the eyes move, thus serving to relate higher-level cognitive processes and attention shifts to lower-level eye-movement behavior. The paper evaluates EMMA in three illustrative domains, equation solving, reading, and visual search, and demonstrates how the model accounts for aspects of behavior that simpler models of cognitive and visual processing fail to explain.
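To make the abstract's claim about frequency and eccentricity effects concrete, the sketch below shows one way the two factors could combine into an encoding time, following the commonly cited EMMA formulation T_enc = K · [−ln f] · e^(kε). It is a minimal illustration, not the paper's implementation; the parameter values K and k are assumed for demonstration only.

```python
import math

# Illustrative parameter values (assumed, not taken from the abstract above).
K = 0.006   # base encoding-time scaling, in seconds
k = 0.4     # eccentricity scaling factor

def encoding_time(frequency: float, eccentricity: float) -> float:
    """Sketch of an EMMA-style encoding time.

    frequency    -- normalized frequency of the visual object (0 < f <= 1);
                    rarer objects take longer to encode.
    eccentricity -- distance from the current fixation, in degrees of
                    visual angle; more peripheral objects take longer.
    """
    return K * -math.log(frequency) * math.exp(k * eccentricity)

# A rare object in the periphery encodes more slowly than a
# frequent object near the fovea.
print(encoding_time(frequency=0.01, eccentricity=5.0))  # rare, peripheral
print(encoding_time(frequency=0.5, eccentricity=1.0))   # common, near fovea
```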