Towards a Model of Information Seeking by Integrating Visual, Semantic and Memory Maps

  • Authors:
  • Myriam Chanceaux, Anne Guérin-Dugué, Benoît Lemaire, Thierry Baccino

  • Affiliations:
  • University of Grenoble, France (Chanceaux, Guérin-Dugué, Lemaire); University of Nice-Sophia-Antipolis, France (Baccino)

  • Venue:
  • Cognitive Vision
  • Year:
  • 2009


Abstract

This paper presents a threefold model of information seeking. A visual map, a semantic map, and a memory map are computed dynamically to predict the location of the next fixation. The model is applied to a task in which the goal is to find, among 40 words, the one that best matches a given definition. The words have visual features and are semantically organized. The model predicts scanpaths, which are compared to human scanpaths on three high-level variables: number of fixations, average angle between saccades, and rate of progression saccades. The best fit to human data is obtained when the memory map is given a strong weight and the semantic map a low one.
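The abstract's core mechanism — three maps combined with weights to select the next fixation — can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the linear combination, the specific weight values, and the decay/inhibition-of-return memory update are all assumptions; the abstract only states that the best fit uses a strong memory weight and a low semantic weight.

```python
import numpy as np

def next_fixation(visual, semantic, memory, w_v=1.0, w_s=0.1, w_m=2.0):
    """Combine the three maps into one activation map over the 40 words
    and return the index of the most activated word (next fixation).
    Weight values reflect the abstract's finding qualitatively:
    strong memory weight, low semantic weight (values are assumed)."""
    activation = w_v * visual + w_s * semantic + w_m * memory
    return int(np.argmax(activation))

def update_memory(memory, fixated, decay=0.9, inhibition=1.0):
    """Hypothetical memory update: old inhibition decays toward zero,
    and the just-fixated word is inhibited so the scanpath moves on
    (an inhibition-of-return mechanism, assumed here)."""
    memory = memory * decay
    memory[fixated] -= inhibition
    return memory

rng = np.random.default_rng(0)
n_words = 40
visual = rng.random(n_words)    # stand-in for visual-feature saliency per word
semantic = rng.random(n_words)  # stand-in for similarity to the definition
memory = np.zeros(n_words)      # no words visited yet

scanpath = []
for _ in range(10):
    fix = next_fixation(visual, semantic, memory)
    scanpath.append(fix)
    memory = update_memory(memory, fix)
print(scanpath)
```

With a strong memory weight, the inhibition term dominates recently fixated words, so the simulated scanpath keeps progressing to new locations rather than refixating — the qualitative behavior the model's best-fitting parameterization produces.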