Computational cognitive modeling of touch and gesture on mobile multitouch devices: applications and challenges for existing theory

  • Authors:
  • Kristen K. Greene; Franklin P. Tamborello; Ross J. Micheals

  • Affiliations:
  • National Institute of Standards and Technology, Gaithersburg, MD; Cogscent LLC, Washington, DC; National Institute of Standards and Technology, Gaithersburg, MD

  • Venue:
  • HCI'13: Proceedings of the 15th International Conference on Human-Computer Interaction: Interaction Modalities and Techniques - Volume Part IV
  • Year:
  • 2013

Abstract

As technology continues to evolve, so too must our modeling and simulation techniques. While formal engineering models of cognitive and perceptual-motor processes are well developed and extensively validated in the traditional desktop computing environment, their application in the new mobile computing environment is far less mature. ACT-Touch, an extension of the ACT-R 6 (Adaptive Control of Thought-Rational) cognitive architecture, seeks to enable new methods for modeling touch and gesture in today's mobile computing environment. The current objective, the addition of a new interaction command vocabulary to ACT-R, is a critical first step toward modeling users' multitouch gestural inputs with greater fidelity and precision. Immediate practical applications and validation challenges are discussed, along with a proposed path forward for the larger modeling community to better measure, understand, and predict human performance in today's increasingly complex interaction landscape.
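The abstract names the contribution, a gesture command vocabulary, without detail, so a minimal sketch may help. The illustration below is in Python (ACT-R itself is written in Common Lisp) and shows the general idea: each gesture becomes a named motor command whose execution time is estimated with Fitts' law, the timing law underlying ACT-R's motor module. The gesture names, function signatures, and parameter values here are assumptions for illustration, not ACT-Touch's actual API or defaults.

```python
import math

# Illustrative timing parameters, not ACT-Touch's actual defaults.
# ACT-R's motor module uses a Fitts' law coefficient on the order of
# 100 ms per bit, with a floor on movement time.
FITTS_COEFF_S = 0.1    # seconds per bit of index of difficulty (assumed)
MIN_MOVE_TIME_S = 0.1  # minimum movement time floor (assumed)

def fitts_time(distance: float, width: float) -> float:
    """Movement time via Fitts' law (Welford formulation, as in ACT-R)."""
    if distance <= 0:
        return MIN_MOVE_TIME_S
    index_of_difficulty = math.log2(distance / width + 0.5)
    return max(MIN_MOVE_TIME_S, FITTS_COEFF_S * index_of_difficulty)

# A minimal gesture "command vocabulary": each gesture maps to a
# function estimating its execution time from its spatial parameters.
def tap(target_distance: float, target_width: float) -> float:
    return fitts_time(target_distance, target_width)

def swipe(swipe_length: float, tolerance: float) -> float:
    # Treat the swipe endpoint as a Fitts target of the given tolerance.
    return fitts_time(swipe_length, tolerance)

def pinch(start_separation: float, end_separation: float,
          tolerance: float) -> float:
    # Two fingers in opposition; approximate with one Fitts movement
    # over the change in finger separation.
    return fitts_time(abs(end_separation - start_separation), tolerance)

GESTURES = {"tap": tap, "swipe": swipe, "pinch": pinch}

if __name__ == "__main__":
    # Example: tap a 1 cm icon 10 cm away, swipe 8 cm with 2 cm
    # tolerance, pinch from 6 cm to 2 cm separation.
    print(f"tap:   {tap(10.0, 1.0):.3f} s")
    print(f"swipe: {swipe(8.0, 2.0):.3f} s")
    print(f"pinch: {pinch(6.0, 2.0, 1.0):.3f} s")
```

The design point, under these assumptions, is that gestures join the architecture's motor repertoire as first-class commands: a cognitive model can request a swipe or pinch just as ACT-R models have long requested a keypress or mouse movement, and the architecture supplies the predicted execution time.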