Increasing the effectiveness of automated assessment by increasing marking granularity and feedback units

  • Authors:
  • Nickolas Falkner, Rebecca Vivian, David Piper, Katrina Falkner

  • Affiliations:
  • The University of Adelaide, Adelaide, Australia (all authors)

  • Venue:
  • Proceedings of the 45th ACM Technical Symposium on Computer Science Education (SIGCSE '14)
  • Year:
  • 2014


Abstract

Computer-based assessment is a useful tool for handling large-scale classes and is extensively used in the automated assessment of student programming assignments in Computer Science. The forms this assessment takes, however, can vary widely, from a simple acknowledgement of receipt to a detailed analysis of output, structure and code. This study focuses on output analysis of submitted student assignment code and the degree to which changes in automated feedback influence student marks and persistence in submission. Data was collected over a four-year period across 22 courses, although we focus on one course in this paper. Assignments were grouped by the number of distinct units of automated feedback delivered per assignment, to investigate whether students changed their submission behaviour or performance as the set of possible marks a student could achieve changed. We found that pre-deadline results improved as the number of feedback units increased, and that post-deadline activity also increased as more feedback units became available.