Testing web sites: five users is nowhere near enough
CHI '01 Extended Abstracts on Human Factors in Computing Systems
Common practice holds that roughly 80% of usability findings are discovered with five participants. Recent findings from web testing indicate that a much larger number of participants is required to obtain reliable results, and that independent teams testing the same web-based product do not replicate one another's findings. How many users are enough for web testing?