Do patterns help novice evaluators? A comparative study

  • Authors:
  • R. Lanzilotti; C. Ardito; M. F. Costabile; A. De Angeli

  • Affiliations:
  • Dipartimento di Informatica, Università di Bari, Italy; Dipartimento di Informatica, Università di Bari, Italy; Dipartimento di Informatica, Università di Bari, Italy; Manchester Business School, The University of Manchester, UK and Department of Information Engineering and Computer Science, University of Trento, Italy

  • Venue:
  • International Journal of Human-Computer Studies
  • Year:
  • 2011

Abstract

Evaluating e-learning systems is a complex activity which requires consideration of several criteria addressing quality in use as well as educational quality. Heuristic evaluation is a widespread method for usability evaluation, yet its output is often prone to subjective variability, primarily due to the generality of many heuristics. This paper presents the pattern-based (PB) inspection, which aims at reducing this drawback by exploiting a set of evaluation patterns to systematically drive inspectors in their evaluation activities. The application of PB inspection to the evaluation of e-learning systems is reported in this paper together with a study that compares this method to heuristic evaluation and user testing. The study involved 73 novice evaluators and 25 end users, who evaluated an e-learning application using one of the three techniques. The comparison metric was defined along six major dimensions, covering concepts of classical test theory and pragmatic aspects of usability evaluation. The study showed that evaluation patterns, capitalizing on the reuse of expert evaluators' know-how, provide a systematic framework which reduces reliance on individual skills, increases inter-rater reliability and output standardization, permits the discovery of a larger set of different problems, and decreases evaluation cost. Results also indicated that evaluation in general is strongly dependent on the methodological apparatus as well as on the judgement bias and individual preferences of evaluators, providing support to the conceptualisation of interactive quality as a subjective judgement, recently brought forward by the UX research agenda.