[1] Comparative evaluation of usability tests. CHI '99 Extended Abstracts on Human Factors in Computing Systems.
[2] Usability in practice: formative usability evaluations - evolution and revolution. CHI '02 Extended Abstracts on Human Factors in Computing Systems.
[3] The "magic number 5": is it enough for web testing? CHI '03 Extended Abstracts on Human Factors in Computing Systems.
[4] Applying user testing data to UEM performance metrics. CHI '04 Extended Abstracts on Human Factors in Computing Systems.
[5] Comparative usability evaluation. Behaviour & Information Technology.
[6] Comparative usability evaluation (CUE-4). Behaviour & Information Technology.
[7] Comparison of techniques for matching of usability problem descriptions. Interacting with Computers.
[8] International Journal of Human-Computer Studies.
[9] Weak inter-rater reliability in heuristic evaluation of video games. CHI '11 Extended Abstracts on Human Factors in Computing Systems.
[10] Usability testing for serious games: making informed design decisions with user data. Advances in Human-Computer Interaction - Special Issue on User Assessment in Serious Games and Technology-Enhanced Learning.
[11] Proceedings of the Biannual Conference of the Italian Chapter of SIGCHI.
Six professional usability testing teams conducted a usability test on an early prototype of a dialog box. Altogether, they identified 36 usability problems. No problem was detected by all six teams: 2 were found by five teams, 4 by four teams, 7 by three teams, 7 by two teams, and 18 by a single team only. Agreement among teams was higher in this study than in a previous study [1], and teams agreed more on severe problems than on minor ones. Implications for the cooperation between usability testers and their clients are discussed.
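To illustrate the overlap these figures imply, the following Python sketch derives a few summary statistics from the reported detection counts. It is not part of the paper, and all names in it are illustrative assumptions. Note that the per-frequency counts as listed sum to 38 rather than the stated 36, so the script works from the distribution as listed.

```python
# Summary statistics for the problem-detection distribution reported in
# the abstract. Illustrative sketch only; variable names are assumptions.

NUM_TEAMS = 6

# Maps k (number of teams that found a problem) to the number of such problems.
problems_by_team_count = {5: 2, 4: 4, 3: 7, 2: 7, 1: 18}

# Total distinct problems implied by the listed counts (sums to 38,
# slightly above the 36 stated in the abstract).
total_problems = sum(problems_by_team_count.values())

# Total (team, problem) detection events across all six teams.
total_detections = sum(k * n for k, n in problems_by_team_count.items())

# Average number of problems a single team reported.
mean_per_team = total_detections / NUM_TEAMS

# Share of problems found by exactly one team: a rough uniqueness measure.
unique_share = problems_by_team_count[1] / total_problems

print(f"problems: {total_problems}, detection events: {total_detections}")
print(f"mean problems per team: {mean_per_team:.1f} "
      f"({mean_per_team / total_problems:.0%} of all problems)")
print(f"found by a single team only: {unique_share:.0%}")
```

Run as given, the sketch shows each team found roughly a third of the listed problems on average, while nearly half of all problems were reported by only one team, which is consistent with the limited inter-team agreement the abstract describes.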