Refining the test phase of usability evaluation: how many subjects is enough? Human Factors (special issue: measurement in human factors).
A mathematical model of the finding of usability problems. Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems.
WebQuilt: a framework for capturing and visualizing the web experience. Proceedings of the 10th International Conference on World Wide Web.
WebQuilt: a proxy-based approach to remote web usability testing. ACM Transactions on Information Systems (TOIS).
NetRaker suite: a demonstration. CHI '02 Extended Abstracts on Human Factors in Computing Systems.
Usability in practice: formative usability evaluations - evolution and revolution. CHI '02 Extended Abstracts on Human Factors in Computing Systems.
Automatic capture, representation, and analysis of user behavior. CHI '02 Extended Abstracts on Human Factors in Computing Systems.
Proceedings of the Second Nordic Conference on Human-Computer Interaction.
The "magic number 5": is it enough for web testing? CHI '03 Extended Abstracts on Human Factors in Computing Systems.
Analysis of combinatorial user effect in international usability tests. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Comparative usability evaluation. Behaviour & Information Technology.
Proceedings of the Third Nordic Conference on Human-Computer Interaction.
Gauging adoptability: a case study of e-portfolio template development. Proceedings of the 33rd Annual ACM SIGUCCS Conference on User Services.
Usability benchmarking case study: media downloads via mobile phones in the US. Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services.
Sample sizes for usability tests: mostly math, not magic. interactions (Waits & Measures).
Heuristics for information visualization evaluation. Proceedings of the 2006 AVI Workshop on BEyond time and errors: novel evaluation methods for information visualization.
Heuristic evaluation: comparing ways of finding and reporting usability problems. Interacting with Computers.
Functionality and usability in design for eStatements in eBanking services. Interacting with Computers.
Usability testing: what have we overlooked? Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Crowdsourcing user studies with Mechanical Turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Introducing item response theory for measuring usability inspection processes. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
The anatomy of prototypes: prototypes as filters, prototypes as manifestations of design ideas. ACM Transactions on Computer-Human Interaction (TOCHI).
Evaluating Information Visualizations. Information Visualization.
Comparative usability evaluation (CUE-4). Behaviour & Information Technology.
Undo and erase events as indicators of usability problems. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
A user-tracing architecture for modeling interaction with the world wide web. Proceedings of the Working Conference on Advanced Visual Interfaces.
Proceedings of the 1st International Conference on Human Centered Design (held as part of HCI International 2009).
Proceedings of the 12th International Conference on Human-Computer Interaction: Interaction Design and Usability.
Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries.
Sample size in usability studies. Communications of the ACM.
Backtracking events as indicators of usability problems in creation-oriented applications. ACM Transactions on Computer-Human Interaction (TOCHI).
Graphical passwords: learning from the first twelve years. ACM Computing Surveys (CSUR).
Reviewing and extending the five-user assumption: a grounded procedure for interaction evaluation. ACM Transactions on Computer-Human Interaction (TOCHI).
We observed the same task executed by 49 users on four production web sites. We tracked the rate at which new usability problems were discovered on each site and, from that data, estimated the total number of usability problems on each site and the number of tests that would be needed to discover every problem. Our findings differ sharply from the rules of thumb derived from earlier work by Virzi [1] and Nielsen [2,3], which are commonly viewed as "industry standards": each of the four sites we studied would need considerably more than five users to find 85% of its problems.
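The rules of thumb the abstract challenges come from Nielsen and Landauer's discovery model [2,3], in which each test user independently reveals each problem with some probability p, so n users are expected to uncover a fraction 1 - (1 - p)^n of all problems. The sketch below (an illustration of that published model, not of this paper's own estimation procedure; the function names are my own) shows why p ≈ 0.31, the value often quoted from the earlier lab studies, yields the famous "five users find ~85%" rule, and why a smaller p, as the abstract reports for production web sites, pushes the required sample size well past five.

```python
import math

def problems_found(n, p):
    """Expected fraction of usability problems uncovered by n test users,
    assuming each user independently reveals each problem with probability p
    (the Nielsen/Landauer model [2,3])."""
    return 1 - (1 - p) ** n

def users_needed(target, p):
    """Smallest n whose expected discovery fraction reaches `target`."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# With the classic p = 0.31, five users are expected to find about 84% of
# the problems -- the basis of the "magic number 5".
print(round(problems_found(5, 0.31), 3))

# With a lower per-user discovery rate, e.g. p = 0.15, reaching 85%
# requires a dozen users rather than five.
print(users_needed(0.85, 0.15))
```

The takeaway is that the "five users" figure is not a constant of usability testing but a consequence of assuming a high per-user discovery rate; when that rate drops, the required sample size grows roughly as log(1 - target) / log(1 - p).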