Evaluating Children's Interactive Products: Principles and Practices for Interaction Designers
In this paper we compare two analytical evaluation methods for educational computer games for young children: the Structured Expert Evaluation Method (SEEM) and a Combined Heuristic Evaluation (HE), which merges Nielsen’s usability heuristics with the fun-related concepts of Malone and Lepper to cover both usability and fun in children’s computer games. To assess SEEM’s relative quality, a study was set up in which adult evaluators predicted problems in computer games. The outcomes are compared on thoroughness (does the method find all problems?), validity (are the problems it uncovers likely to be real?), and appropriateness (is the method applied correctly?). The results show that SEEM scores higher than the Combined HE on both thoroughness and validity, and the appropriateness scores indicate that SEEM gives evaluators more guidance when predicting problems than the Combined HE does.
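The thoroughness and validity measures mentioned above are commonly defined in the usability-evaluation literature as simple ratios over predicted and real problems. A minimal sketch of these definitions follows; the example counts are purely illustrative, not results from the paper.

```python
def thoroughness(real_found: int, real_total: int) -> float:
    """Fraction of all real problems that the method detected."""
    return real_found / real_total

def validity(real_found: int, predicted_total: int) -> float:
    """Fraction of the method's predicted problems that turned out to be real."""
    return real_found / predicted_total

# Illustrative example (hypothetical counts, not from the study):
# evaluators predict 20 problems, 12 of which match the 15 real problems.
print(thoroughness(12, 15))  # 0.8
print(validity(12, 20))      # 0.6
```

Under these definitions, a method can be valid without being thorough (few, but accurate, predictions) and vice versa, which is why the study reports both measures separately.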