Learning where to look: location learning in graphical user interfaces

  • Authors: Brian D. Ehret
  • Affiliation: Sun Microsystems, Inc., Palo Alto, CA
  • Venue: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Year: 2002


Abstract

A theoretical account is presented of how the locations of interface objects are learned and how the mechanisms underlying location learning interact with the representativeness of object labels. The account is embodied in a computational cognitive model built within the ACT-R/PM cognitive architecture [1, 2] and is supported by point-of-gaze and performance data collected in empirical research. The model interacts with the same software under the same experimental task conditions as study participants and replicates both the performance data and the finer-grained point-of-gaze data. Drawing from the data and model, location learning is characterized as a process that occurs as a by-product of interaction: without specific intent to do so, users gradually learn the locations of the interface objects to which they attend. Characteristics of the user interface shape this learning process, however, by constraining the set of possible strategies for interaction. Locations are learned more quickly when the least-effortful strategy available in the interface explicitly requires retrieval of location knowledge.