In-game assessments increase novice programmers' engagement and level completion speed

  • Authors: Michael J. Lee, Andrew J. Ko, Irwin Kwan
  • Affiliations: University of Washington, Seattle, WA, USA (Lee, Ko); Oregon State University, Corvallis, OR, USA (Kwan)
  • Venue: Proceedings of the Ninth Annual International ACM Conference on International Computing Education Research
  • Year: 2013


Abstract

Assessments have been shown to have positive effects on learning in compulsory educational settings. However, much less is known about their effects in discretionary learning settings, especially in computing education and educational games. We hypothesized that adding assessments to an educational computing game would provide extra opportunities for players to practice and correct misconceptions, thereby affecting their performance on subsequent levels and their motivation to continue playing. To test this, we designed a game called Gidget, which follows a mastery learning paradigm and in which players help a robot find and fix defects in programs. Across two studies, we manipulated the inclusion of multiple-choice and self-explanation assessment levels in the game, measuring their impact on engagement and level completion speed. In our first study, we found that including assessments caused learners to voluntarily play longer and complete more levels, suggesting increased engagement; in our second study, we found that including assessments caused learners to complete levels faster, suggesting increased understanding. These findings suggest that including assessments in a discretionary computing education game may be a key design strategy for improving informal learning of computing concepts.