Comparison of techniques for matching of usability problem descriptions

  • Authors:
  • Kasper Hornbæk; Erik Frøkjær

  • Affiliations:
  • Department of Computer Science, University of Copenhagen, Universitetsparken 1, DK-2100 Copenhagen, Denmark (both authors)

  • Venue:
  • Interacting with Computers
  • Year:
  • 2008

Abstract

Matching of usability problem descriptions consists of determining which problem descriptions are similar and which are not. In most comparisons of evaluation methods, matching helps determine the overlap among methods and among evaluators. However, matching has received scant attention in usability research and may be fundamentally unreliable. We compare how 52 novice evaluators match the same set of problem descriptions from three think-aloud studies. To match the problem descriptions, the evaluators use one of four techniques: (a) the similarity of solutions to the problems, (b) a prioritization effort for the owner of the application tested, (c) a model proposed by Lavery and colleagues [Lavery, D., Cockton, G., Atkinson, M.P., 1997. Comparison of evaluation methods using structured usability problem reports. Behaviour & Information Technology 16 (4/5), 246-266], or (d) the User Action Framework [Andre, T.S., Hartson, H.R., Belz, S.M., McCreary, F.A., 2001. The user action framework: a reliable foundation for usability engineering support tools. International Journal of Human-Computer Studies 54 (1), 107-136]. The resulting matches differ, both in the number of problems grouped or identified as unique and in the content of the problem descriptions that are matched. Evaluators report different concerns and foci of attention when using the techniques. We illustrate how these differences among techniques might adversely influence the reliability of findings in usability research, and discuss some remedies.
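
As a rough illustration of what is at stake when matchings disagree (not a measure prescribed by the paper), the sketch below treats each evaluator's matching as a partition of the problem descriptions into groups, with problems judged unique forming singleton groups, and computes a Rand-style pairwise agreement score between two matchings. The evaluator names and problem IDs are invented for illustration.

```python
from itertools import combinations

def pairwise_agreement(matching_a, matching_b):
    """Rand-style agreement between two matchings of the same problem set.

    Each matching is a list of groups (sets of problem IDs); problems
    judged unique appear as singleton groups. Agreement is the fraction
    of problem pairs on which the two matchings make the same decision
    (matched together vs. kept apart).
    """
    def group_index(matching):
        # Map each problem ID to the index of the group containing it.
        lookup = {}
        for idx, group in enumerate(matching):
            for problem in group:
                lookup[problem] = idx
        return lookup

    in_a, in_b = group_index(matching_a), group_index(matching_b)
    problems = sorted(in_a)
    # Count pairs where both matchings agree on "same group or not".
    agree = sum(
        (in_a[p] == in_a[q]) == (in_b[p] == in_b[q])
        for p, q in combinations(problems, 2)
    )
    total = len(problems) * (len(problems) - 1) // 2
    return agree / total

# Hypothetical data: two evaluators match six problem descriptions.
evaluator_1 = [{"P1", "P2"}, {"P3"}, {"P4", "P5", "P6"}]
evaluator_2 = [{"P1", "P2", "P3"}, {"P4", "P5"}, {"P6"}]
print(f"pairwise agreement: {pairwise_agreement(evaluator_1, evaluator_2):.2f}")
```

On this invented example the score is 0.73: the two evaluators agree on most pairs but disagree on whether P3 belongs with P1-P2 and whether P6 belongs with P4-P5, which mirrors the kind of divergence in grouping and uniqueness judgments the study reports.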