The debate concerning how many participants constitute a sufficient number for interaction testing is well established and long running, with prominent contributions arguing that five users provide a good benchmark when seeking to discover interaction problems. We argue that the five-user benchmark is often adopted with little understanding of the basis for, or the implications of, the decision. We present an analysis of relevant research to clarify the meaning of the five-user assumption and to examine how the original research that suggested it has been applied: blindly adopted in some studies, and criticised as inadequate in others. We argue that the five-user assumption is often misunderstood, not only in the field of Human-Computer Interaction, but also in fields such as medical device design and in business and information applications. The analysis that we present allows us to define a systematic approach for monitoring the sample discovery likelihood in formative and summative evaluations, and for gathering the information needed to make critical decisions during interaction testing, while respecting the aim of the evaluation and the allotted budget. We introduce this approach, which we call the Grounded Procedure, and argue for its value.
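The five-user benchmark discussed above traces back to the problem-discovery model of Nielsen and Landauer, in which the expected proportion of usability problems found grows as 1 - (1 - p)^n for n participants, assuming every problem is detected by any single participant with the same fixed probability p. A minimal sketch of that model (the value p = 0.31 is the often-cited empirical average, used here purely for illustration):

```python
def discovery_likelihood(p: float, n: int) -> float:
    """Expected share of usability problems found after n participants,
    under the simplifying assumption that each problem is detected by
    any one participant with fixed probability p."""
    return 1.0 - (1.0 - p) ** n

# With the often-cited average p = 0.31, five users are expected
# to uncover roughly 84% of the problems.
for n in range(1, 6):
    print(n, round(discovery_likelihood(0.31, n), 2))
```

Note that this single-p model is exactly the simplification the abstract questions: when detection probabilities vary across problems and evaluators, a fixed sample size chosen in advance may badly over- or under-estimate coverage, which motivates monitoring the discovery likelihood as testing proceeds.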