A comparison of two analytical evaluation methods for educational computer games for young children

  • Authors:
  • Mathilde M. Bekker;Ester Baauw;Wolmet Barendregt

  • Affiliations:
  • Eindhoven University of Technology, Department of Industrial Design, P.O. Box 513, 5600 MB, Eindhoven, The Netherlands (all authors)

  • Venue:
  • Cognition, Technology and Work
  • Year:
  • 2008

Abstract

In this paper we describe a comparison of two analytical evaluation methods for educational computer games for young children. The methods compared in the study are the Structured Expert Evaluation Method (SEEM) and a Combined Heuristic Evaluation (HE), which combines Nielsen’s usability heuristics with fun-related heuristics based on the concepts of Malone and Lepper, thereby covering both usability and fun in children’s computer games. To verify SEEM’s relative quality, a study was set up in which adult evaluators predicted problems in computer games. Outcomes are compared on thoroughness (whether the analytical method finds all problems), validity (whether the problems the analytical method uncovers are likely to be real) and appropriateness (whether the method is applied correctly). The results show that both the thoroughness and the validity of SEEM are higher than those of the Combined HE. The appropriateness scores indicate that SEEM gives evaluators more guidance when predicting problems than the Combined HE does.
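As a brief aside not stated in the abstract itself, thoroughness and validity are commonly formalised in the usability-evaluation literature roughly as below, where P denotes the set of problems predicted by the method and R the set of real problems observed with users; the exact definitions used in the paper may differ.

\[
\text{thoroughness} = \frac{|P \cap R|}{|R|}, \qquad
\text{validity} = \frac{|P \cap R|}{|P|}
\]

Under this reading, a higher thoroughness means the method misses fewer real problems, while a higher validity means fewer of its predictions are false alarms.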