No tests required: comparing traditional and dynamic predictors of programming success

  • Authors:
  • Christopher Watson; Frederick W.B. Li; Jamie L. Godwin

  • Affiliations:
  • University of Durham, Durham, United Kingdom (all authors)

  • Venue:
  • Proceedings of the 45th ACM Technical Symposium on Computer Science Education (SIGCSE '14)
  • Year:
  • 2014

Abstract

Research over the past fifty years into predictors of programming performance has yielded little improvement in the identification of at-risk students. This may be because research to date has relied upon static tests, which fail to reflect changes in a student's learning progress over time. In this paper, the effectiveness of 38 traditional predictors of programming performance is compared with that of 12 new data-driven predictors, which are based upon analyzing directly logged data describing the programming behavior of students. Whilst few strong correlations were found between the traditional predictors and performance, many of the behavior-based predictors correlated strongly and significantly with performance. A model based upon two of these metrics (Watwin score and percentage of lab time spent resolving errors) could explain 56.3% of the variance in coursework results. The implication of this study is that a student's programming behavior is one of the strongest indicators of their performance, and future work should continue to explore such predictors in different teaching contexts.
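To make the reported result concrete, the kind of model the abstract describes is an ordinary least-squares regression on two behavioral predictors, evaluated by the proportion of variance (R^2) it explains in coursework marks. The sketch below is illustrative only: the sample values, the use of scikit-learn, and the variable names are assumptions, and the paper's actual Watwin-score computation is not reproduced here.

```python
# Illustrative sketch: a two-predictor OLS regression of the kind the
# abstract describes. All data values here are hypothetical; the paper's
# Watwin score is a published metric whose computation is not shown.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-student measurements: Watwin score, percentage of lab
# time spent resolving errors, and final coursework mark.
watwin = np.array([72.1, 65.4, 80.3, 55.0, 90.2, 48.7])
pct_error_time = np.array([30.5, 42.0, 22.8, 55.1, 15.3, 60.4])
coursework = np.array([68.0, 55.0, 74.0, 45.0, 85.0, 40.0])

# Stack the two predictors into a design matrix and fit the model.
X = np.column_stack([watwin, pct_error_time])
model = LinearRegression().fit(X, coursework)

# R^2 is the fraction of variance in coursework marks explained by the
# two predictors; the paper reports 56.3% on its real dataset.
print(f"R^2 = {model.score(X, coursework):.3f}")
```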