For more than a decade, the number of usability test participants has been a major topic of debate among usability practitioners and researchers seeking to improve usability test performance. This paper provides evidence suggesting that the focus should shift to task coverage instead. Our analysis of data from the nine commercial usability test teams that participated in the CUE-4 study revealed no significant correlation between the number of test users and either the percentage of problems found or the percentage of new problems found; however, the correlations between both of these variables and the number of user tasks used by each team were significant. The role of participant recruitment in usability test performance and directions for future research are discussed.
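The kind of correlation analysis described above can be sketched as follows. This is an illustrative example only: the per-team task counts and problem-discovery percentages are hypothetical placeholders, not the CUE-4 data, and the helper `pearson_r` is a plain-Python stand-in for a statistics library.

```python
# Pearson correlation between the number of user tasks each team used and
# the percentage of problems that team found. Data points are hypothetical
# placeholders, NOT the CUE-4 results.

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-team data: tasks used vs. % of known problems found.
tasks = [5, 7, 9, 12, 14, 15, 18, 20, 22]
found = [20, 25, 30, 35, 40, 42, 50, 55, 60]

print(round(pearson_r(tasks, found), 3))
```

In practice one would use a statistics package (e.g. `scipy.stats.pearsonr`), which also reports a p-value so the significance of the correlation can be tested, as in the study.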