A tool for creating predictive performance models from user interface demonstrations

  • Authors:
  • Scott E. Hudson; Bonnie E. John; Keith Knudsen; Michael D. Byrne

  • Affiliation (all authors):
  • Human Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • Proceedings of the 12th annual ACM symposium on User interface software and technology
  • Year:
  • 1999


Abstract

A central goal of many user interface development tools has been to make the construction of high-quality interfaces easy enough that iterative design approaches could be a practical reality. In the last 15 years significant advances have been achieved in this regard. However, the evaluation portion of the iterative design process has received relatively little support from tools. Even though advances have also been made in usability evaluation methods, nearly all evaluation is still done “by hand,” making it more expensive and difficult than it might be. This paper considers a partial implementation of the CRITIQUE usability evaluation tool, which is being developed to help remedy this situation by automating a number of evaluation tasks. It describes the techniques the system uses to produce predictive models (keystroke-level models and simplified GOMS models) from demonstrations of sample tasks in a fraction of the time needed by conventional handcrafting methods. A preliminary comparison of automatically generated models with models created by an expert modeler shows that they produce very similar predictions (within 2%). Further, because they are automated, these models promise to be less subject to human error and less affected by the skill of the modeler.
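To make the underlying idea concrete, a keystroke-level model predicts task time by summing fixed operator times over the sequence of primitive actions a user performs. The sketch below is illustrative only: it uses the standard KLM operators and times from the literature (Card, Moran & Newell), not the specific operator set or inference rules of the CRITIQUE system described in this paper.

```python
# Minimal keystroke-level model (KLM) prediction sketch.
# Operator times are the commonly cited KLM values (seconds); they are
# an assumption drawn from the KLM literature, not from this paper.
KLM_TIMES = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(operators):
    """Sum operator times for a demonstrated action sequence."""
    return sum(KLM_TIMES[op] for op in operators)

# A hypothetical demonstrated sequence: mentally prepare, point at a
# menu item, click (button down + up), home to keyboard, type two keys.
sequence = ["M", "P", "B", "B", "H", "K", "K"]
print(round(predict_time(sequence), 2))  # predicted task time in seconds
```

A tool like the one described would record such an operator sequence automatically from a user's demonstration, rather than requiring an analyst to enumerate the operators by hand.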