Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces
CHI '90 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Heuristic evaluation of user interfaces
CHI '90 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
User interface evaluation in the real world: a comparison of four techniques
CHI '91 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Finding usability problems through heuristic evaluation
CHI '92 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
A validation of ergonomic criteria for the evaluation of user interfaces
ACM SIGCHI Bulletin
Usability testing vs. heuristic evaluation: was there a contest?
ACM SIGCHI Bulletin
Usability inspection methods after 15 years of research and practice
SIGDOC '07 Proceedings of the 25th annual ACM international conference on Design of communication
Game Usability Heuristics (PLAY) for Evaluating and Designing Better Games: The Next Iteration
OCSC '09 Proceedings of the 3d International Conference on Online Communities and Social Computing: Held as Part of HCI International 2009
The Damage Index: an aggregation tool for usability problem prioritisation
BCS '10 Proceedings of the 24th BCS Interaction Specialist Group Conference
Traditional laboratory usability testing is frequently not performed because companies lack the funds, planning, or human-factors expertise. Consequently, there is increasing interest in alternative usability evaluation methods that are easier and cheaper to implement than traditional laboratory testing, and recent studies have begun to examine and compare such techniques. These methods include Heuristic Evaluation (Nielsen and Molich, 1990) and the Cognitive Walkthrough (Polson, Lewis, Rieman, & Wharton, 1990). For Heuristic Evaluation, Nielsen (1992) found that human-factors experts were best at finding an interface's usability problems, especially experts who were also expert in the interface's domain. Desurvire, Lawrence, and Atwood (1991) found that experts' evaluations were the most reliable and that their best-guess predictions were predictive of laboratory performance. Karat, Campbell, and Fiegel (1992) similarly found that heuristic results were reliable and significantly predictive of laboratory data, yet empirical laboratory testing identified four to five times as many problems. Jeffries, Miller, Wharton, and Uyeda (1991) found that Heuristic Evaluation uncovered more severe problems than laboratory testing or the Cognitive Walkthrough. That comparison study, however, used only experts in the heuristic condition and system designers in the Cognitive Walkthrough condition.