Usability testing is recognized as an effective means of improving the usability of medical devices and preventing harm to patients and users. The effectiveness of problem discovery in usability testing depends strongly on the size and representativeness of the sample. We introduce the late control strategy: continuously monitoring the effectiveness of a study against a preset target. A statistical model, the LNBzt model, is presented to support the late control strategy. We report on a case study in which a prototype medical infusion pump underwent a usability test with 34 users. On the data obtained in this study, the LNBzt model is evaluated and compared against earlier prediction models. The LNBzt model fits the data much better than previously suggested approaches and improves prediction. We measure the effectiveness of problem identification and observe that it is lower than much of the literature suggests; larger sample sizes appear to be needed. In addition, the testing process showed high levels of uncertainty and volatility at small to moderate sample sizes, partly due to individual differences among users. In response, we propose the idiosyncrasy score as a means of obtaining representative samples. Statistical programs are provided to assist practitioners and researchers in applying the late control strategy.