As technology continues to evolve, so too must our modeling and simulation techniques. While formal engineering models of cognitive and perceptual-motor processes are well developed and extensively validated in the traditional desktop computing environment, their application in the newer mobile computing environment is far less mature. ACT-Touch, an extension of the ACT-R 6 (Adaptive Control of Thought-Rational) cognitive architecture, seeks to enable new methods for modeling touch and gesture in today's mobile computing environment. The current objective, the addition of a new ACT-R interaction command vocabulary, is a critical first step toward modeling users' multitouch gestural inputs with greater fidelity and precision. Immediate practical applications and validation challenges are discussed, along with a proposed path forward for the larger modeling community to better measure, understand, and predict human performance in today's increasingly complex interaction landscape.