Specifying ACT-R models of user interaction with a GOMS language

  • Authors:
  • Robert St. Amant; Andrew R. Freed; Frank E. Ritter

  • Affiliations:
  • Department of Computer Science, North Carolina State University, Raleigh, NC 27695, USA; -; School of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802, USA

  • Venue:
  • Cognitive Systems Research
  • Year:
  • 2005

Abstract

We describe a system, G2A, that produces ACT-R models from GOMS models. The GOMS models can contain hierarchical methods, visual and memory stores, and control constructs. G2A allows ACT-R models to be built much more quickly, in hours rather than weeks. Because GOMS is a more abstract formalism than ACT-R, most GOMS operators can plausibly be translated into ACT-R productions in more than one way (e.g., a GOMS Look-for operator can be carried out by different visual search strategies in ACT-R). Given a GOMS model, G2A generates and evaluates alternative ACT-R models by systematically varying the mapping of GOMS operators to ACT-R productions. In experiments with a text editing task, G2A produces ACT-R models whose predictions are within 5% of GOMS model predictions. In the same domain, G2A also generates ACT-R models that predict better than GOMS, matching actual users' overall task durations within 2%, though the models are less accurate at a detailed level. In a separate experiment with a mouse-driven telephone dialing task, G2A produces models that distinguish between competing interfaces better than a Fitts' law model or a hand-built ACT-R model. G2A begins to describe the relationship between two major theories of cognition. That relationship may appear simple, but the complexity of the translation illustrates why it had not been worked out before. G2A also points a way forward for cognitive modeling: higher-level languages that compile into more detailed specifications.
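
The search the abstract describes, systematically varying which ACT-R production template each GOMS operator compiles to and keeping the variant whose predictions fit best, can be sketched in a few lines. The sketch below is purely illustrative: the operator names, candidate templates, cost table, and scoring function are hypothetical stand-ins and do not reflect G2A's actual implementation or ACT-R's interfaces.

```python
from itertools import product

# Illustrative only: each GOMS operator maps to any of several candidate
# ACT-R production templates (e.g., different visual search strategies
# for Look-for). Names and values are hypothetical.
CANDIDATE_TRANSLATIONS = {
    "look-for": ["random-search", "nearest-first-search", "systematic-scan"],
    "move-mouse": ["single-move", "move-with-preparation"],
    "click": ["click-immediate", "click-after-verify"],
}

# Stand-in for running the generated ACT-R model and reading off its
# predicted task time in seconds.
ILLUSTRATIVE_COSTS = {
    "random-search": 1.10, "nearest-first-search": 0.85, "systematic-scan": 0.95,
    "single-move": 0.60, "move-with-preparation": 0.75,
    "click-immediate": 0.15, "click-after-verify": 0.35,
}

def predicted_duration(mapping):
    """Predicted duration of one candidate model (hypothetical costs)."""
    return sum(ILLUSTRATIVE_COSTS[template] for template in mapping.values())

def enumerate_models():
    """Systematically vary the operator-to-production mapping,
    yielding one candidate model per combination."""
    operators = list(CANDIDATE_TRANSLATIONS)
    for choice in product(*(CANDIDATE_TRANSLATIONS[op] for op in operators)):
        yield dict(zip(operators, choice))

def best_model(observed_duration):
    """Keep the mapping whose prediction is closest to the observed time."""
    return min(enumerate_models(),
               key=lambda m: abs(predicted_duration(m) - observed_duration))

if __name__ == "__main__":
    print(best_model(observed_duration=1.8))
```

The point of the sketch is only the shape of the procedure: enumerate the cross product of per-operator translation choices, predict a duration for each resulting model, and select by fit to observed behavior.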