Evaluating a new exam question: Parsons problems

  • Authors:
  • Paul Denny; Andrew Luxton-Reilly; Beth Simon

  • Affiliations:
  • The University of Auckland, Auckland, New Zealand; The University of Auckland, Auckland, New Zealand; University of California, San Diego, CA, USA

  • Venue:
  • ICER '08: Proceedings of the Fourth International Workshop on Computing Education Research
  • Year:
  • 2008

Abstract

Common exam practice centres around two question types: code tracing (reading) and code writing. It is commonly believed that code tracing is easier than code writing, and it seems clear that the two demand different skills. The two question types also differ in their value on an exam: pedagogically, code tracing on paper is an authentic task, whereas code writing on paper is less so. Yet few instructors are willing to forgo a code writing question on an exam. Is there another question type, easier to grade, that captures the "problem solving through code creation process" we wish to examine? In this paper we propose Parsons puzzle-style problems for this purpose. We explore their potential both qualitatively, through interviews, and quantitatively, through a set of CS1 exams. We find a notable correlation between Parsons scores and code writing scores, but low correlations between code writing and tracing and between Parsons and tracing. We also make the case that marks from a Parsons problem reveal what students do not know, in both syntax and logic, far less ambiguously than marks from a code writing problem. We make recommendations on the design of Parsons problems for the exam setting, discuss their potential uses, and urge further investigation of Parsons problems for the assessment of CS1 students.
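
For readers unfamiliar with the question type, the sketch below illustrates what a Parsons-style puzzle might look like. It is not drawn from the paper; the task (summing the even numbers in a list) and the use of Python are illustrative assumptions. The student receives the lines of a correct short program in scrambled order and must arrange them into a working solution.

    # Illustrative Parsons-style puzzle (hypothetical, not from the paper).
    # Lines as handed to the student, deliberately scrambled:
    #   return total
    #   total = 0
    #   for value in values:
    #   def sum_of_evens(values):
    #   if value % 2 == 0:
    #   total += value
    #
    # One correct arrangement, which the student must reconstruct:
    def sum_of_evens(values):
        total = 0
        for value in values:
            if value % 2 == 0:
                total += value
        return total

    print(sum_of_evens([1, 2, 3, 4]))  # prints 6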