Predictive human performance modeling made easy. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Predicting task execution time on handheld devices using the keystroke-level model. In CHI '05 Extended Abstracts on Human Factors in Computing Systems.
Proceedings of the 33rd International Conference on Software Engineering.
Complexity analysis: a quantitative approach to usability engineering. In Proceedings of the 2011 Conference of the Center for Advanced Studies on Collaborative Research.
Experiences with collaborative, distributed predictive human performance modeling. In CHI '12 Extended Abstracts on Human Factors in Computing Systems.
Predictive human performance modeling has traditionally been used to make quantitative comparisons between alternative designs (e.g., task execution time for skilled users) rather than to identify UI problems or make design recommendations. This note investigates how reliably novice modelers can extract design recommendations from their models. Many HCI evaluation methods have been plagued by the "evaluator effect" [3], i.e., different people using the same method find different UI problems. Our data and analyses show that predictive human performance modeling is no exception. Novice modelers using CogTool [5] show 34% Any-Two Agreement in their design recommendations, a result in the upper quartile of evaluator effect studies. However, because these recommendations are grounded in models, they may have a more reliable impact on measurable performance than recommendations arising from less formal methods.
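Any-Two Agreement, the evaluator-effect measure reported above, is commonly computed as the average overlap (Jaccard ratio) between every pair of evaluators' finding sets. The sketch below illustrates that calculation; the recommendation labels and modeler data are hypothetical, invented only to show the arithmetic, not taken from the study.

```python
from itertools import combinations

def any_two_agreement(findings):
    """Mean Jaccard overlap |A ∩ B| / |A ∪ B| over all pairs of finding sets."""
    pairs = list(combinations(findings, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# Hypothetical recommendation sets from three novice modelers (illustrative only).
modelers = [
    {"shorten menu path", "add keyboard shortcut"},
    {"shorten menu path", "remove confirmation dialog"},
    {"add keyboard shortcut", "remove confirmation dialog"},
]
print(any_two_agreement(modelers))  # each pair shares 1 of 3 distinct items -> 1/3
```

In this toy example every pair of modelers agrees on one of three distinct recommendations, giving an Any-Two Agreement of about 33%, close to the 34% figure reported for the CogTool study.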