Usability inspections by groups of specialists: perceived agreement in spite of disparate observations

  • Authors:
  • Morten Hertzum; Niels Ebbe Jacobsen; Rolf Molich

  • Affiliations:
  • Risø National Laboratory, Roskilde, Denmark; Nokia Mobile Phones, Copenhagen V, Denmark; DialogDesign, Stenløse, Denmark

  • Venue:
  • CHI '02 Extended Abstracts on Human Factors in Computing Systems
  • Year:
  • 2002

Abstract

Evaluators who examine the same system using the same usability evaluation method tend to report substantially different sets of problems. This so-called evaluator effect means that different evaluations point to considerably different revisions of the evaluated system. The first step in coping with the evaluator effect is to acknowledge its existence. In this study, 11 usability specialists individually inspected a website and then met in four groups to combine their findings into group outputs. Although the overlap in reported problems between any two evaluators averaged only 9%, the 11 evaluators felt that they were largely in agreement. The evaluators perceived their disparate observations as multiple sources of evidence in support of the same issues, not as disagreements. Thus, the group work increased the evaluators' confidence in their individual inspections rather than alerting them to the evaluator effect.
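The abstract's 9% figure is an average pairwise overlap between evaluators' problem sets. As a rough sketch of how such a statistic can be computed, the snippet below uses intersection-over-union as the overlap measure and invented problem labels; the study's exact measure and data may differ.

```python
from itertools import combinations

def pairwise_overlap(a, b):
    """Overlap between two evaluators' problem sets, measured here
    as intersection over union (one common choice; the paper's
    exact measure is not specified in this abstract)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def mean_pairwise_overlap(problem_sets):
    """Average overlap across all pairs of evaluators."""
    pairs = list(combinations(problem_sets, 2))
    return sum(pairwise_overlap(a, b) for a, b in pairs) / len(pairs)

# Hypothetical problem sets for three evaluators
# (labels P1..P6 are invented, not from the study).
evaluators = [
    {"P1", "P2", "P3"},
    {"P2", "P4"},
    {"P5", "P6"},
]
print(round(mean_pairwise_overlap(evaluators), 2))  # prints 0.08
```

A low value like this illustrates the evaluator effect quantitatively: each pair of evaluators shares only a small fraction of its reported problems, even though all examined the same system.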