Finding usability problems through heuristic evaluation
CHI '92 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Observing, predicting, and analyzing usability problems
Usability inspection methods
Evaluating a multimedia authoring tool
Journal of the American Society for Information Science - Special issue on current research in human-computer interaction
The evaluator effect in usability tests
CHI '98 Conference Summary on Human Factors in Computing Systems
The user action framework: a reliable foundation for usability engineering support tools
International Journal of Human-Computer Studies
The Usability Problem Taxonomy: A Framework for Classification and Analysis
Empirical Software Engineering
A Practical Guide to Usability Testing
On the reliability of usability testing
CHI '01 Extended Abstracts on Human Factors in Computing Systems
Heuristic evaluation of ambient displays
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Comparative usability evaluation
Behaviour & Information Technology
Supporting problem identification in usability evaluations
OZCHI '05 Proceedings of the 17th Australia Conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future
What do usability evaluators do in practice?: an explorative study of think-aloud testing
DIS '06 Proceedings of the 6th Conference on Designing Interactive Systems
Comparative usability evaluation (CUE-4)
Behaviour & Information Technology
Controlling the usability evaluation process under varying defect visibility
Proceedings of the 23rd British HCI Group Annual Conference on People and Computers: Celebrating People and Technology
Analysis in usability evaluations: an exploratory study
Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries
User experience to improve the usability of a vision-based interface
Interacting with Computers
Weak inter-rater reliability in heuristic evaluation of video games
CHI '11 Extended Abstracts on Human Factors in Computing Systems
Sample size in usability studies
Communications of the ACM
Backtracking Events as Indicators of Usability Problems in Creation-Oriented Applications
ACM Transactions on Computer-Human Interaction (TOCHI)
Journal of Biomedical Informatics
Matching of usability problem descriptions consists of determining which problem descriptions are similar and which are not. In most comparisons of evaluation methods, matching helps determine the overlap among methods and among evaluators. Yet matching has received scant attention in usability research and may be fundamentally unreliable. We compare how 52 novice evaluators match the same set of problem descriptions from three think-aloud studies. To match the problem descriptions, the evaluators use one of four techniques: (a) the similarity of solutions to the problems, (b) a prioritization effort for the owner of the application tested, (c) a model proposed by Lavery and colleagues [Lavery, D., Cockton, G., Atkinson, M.P., 1997. Comparison of evaluation methods using structured usability problem reports. Behaviour and Information Technology, 16 (4/5), 246-266], or (d) the User Action Framework [Andre, T.S., Hartson, H.R., Belz, S.M., McCreary, F.A., 2001. The user action framework: a reliable foundation for usability engineering support tools. International Journal of Human-Computer Studies, 54 (1), 107-136]. The resulting matches differ, both in the number of problems grouped or identified as unique and in the content of the problem descriptions that are matched. Evaluators report different concerns and foci of attention when using the techniques. We illustrate how these differences among techniques might adversely influence the reliability of findings in usability research, and discuss some remedies.
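To make the notions of matching and overlap concrete, the following is a minimal Python sketch. It groups problem descriptions by surface-text similarity (difflib's SequenceMatcher) and scores the agreement between two matchings as the fraction of description pairs grouped together by both. Both the similarity heuristic and the overlap measure are illustrative assumptions of ours, not the techniques (a)-(d) that the evaluators in the study applied.

from difflib import SequenceMatcher

def match_problems(descriptions, threshold=0.35):
    """Greedily group descriptions whose text similarity to a group's
    first member meets the threshold; a description that matches no
    existing group starts a new one (a "unique" problem)."""
    groups = []
    for desc in descriptions:
        for group in groups:
            if SequenceMatcher(None, desc.lower(), group[0].lower()).ratio() >= threshold:
                group.append(desc)
                break
        else:
            groups.append([desc])
    return groups

def overlap(groups_a, groups_b):
    """Fraction of description pairs placed in the same group by both
    matchings, out of the pairs grouped together by at least one."""
    def together(groups):
        return {frozenset((x, y)) for g in groups for x in g for y in g if x != y}
    pa, pb = together(groups_a), together(groups_b)
    union = pa | pb
    return len(pa & pb) / len(union) if union else 1.0

problems = [
    "Search button hard to find",
    "Users could not locate the search button",
    "Error message uses technical jargon",
]
lenient = match_problems(problems)                 # merges the two search-button reports
strict = match_problems(problems, threshold=0.9)   # keeps all three reports unique
print(lenient)
print(overlap(lenient, strict))                    # 0.0: the matchings share no grouped pair

In this toy example, varying a single matching criterion changes both the count of unique problems and the overlap score, a small-scale analogue of the effect the abstract reports: the same problem set, matched by different techniques, yields different groupings and hence different research conclusions.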