Traditional methods of evaluating student programs are not always appropriate for assessing different instructional interventions: they tend to focus on the final product rather than on the process that led to it. This paper presents intention-based scoring (IBS), an approach to measuring programming ability that examines the intermediate programs produced over the course of an implementation rather than only the final one. The intent is to assess a student's ability to produce algorithmically correct code on the first attempt at each program goal. In other words, the goal is to answer the question "How close was the student to being initially correct?" without speaking to the student's debugging skills or ability to ultimately produce a working program. To produce an IBS, it is necessary to inspect a student's online protocol, which is simply the collection of all programs submitted to a compiler. IBS involves a three-phase process: (1) identifying the subset of programs in a protocol that represent the initial attempts at achieving programming goals, (2) identifying bugs, and (3) scoring against a rubric. We conclude with an example application of IBS in the evaluation of a tutoring system for beginning programmers, and we also show how an IBS can be broken down by the underlying bug categories to reveal more subtle differences.
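The three-phase process above can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: the `Submission` data model, the per-goal scoring rule (each goal worth one point, with rubric deductions per bug category), and all names here are assumptions introduced for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical data model: each compiler submission in a student's online
# protocol is tagged with the programming goal it attempts and the bug
# categories found on inspection (both assumed labels, for illustration).
@dataclass
class Submission:
    goal: str
    bugs: list = field(default_factory=list)

def first_attempts(protocol):
    """Phase 1: keep only the first submission per goal, in order."""
    seen, firsts = set(), []
    for sub in protocol:
        if sub.goal not in seen:
            seen.add(sub.goal)
            firsts.append(sub)
    return firsts

def intention_based_score(protocol, rubric):
    """Phases 2-3: score each initial attempt against a rubric.

    `rubric` maps a bug category to a point deduction; each goal is
    worth one point, so the result is the fraction of 'initially
    correct' work across all goals (an assumed scoring rule).
    """
    firsts = first_attempts(protocol)
    if not firsts:
        return 0.0
    total = 0.0
    for sub in firsts:
        deduction = sum(rubric.get(bug, 0.0) for bug in sub.bugs)
        total += max(0.0, 1.0 - deduction)
    return total / len(firsts)

def score_by_bug_category(protocol):
    """Break initial attempts down by bug category (cf. the paper's
    category-level analysis): count each bug type across first attempts."""
    counts = {}
    for sub in first_attempts(protocol):
        for bug in sub.bugs:
            counts[bug] = counts.get(bug, 0) + 1
    return counts
```

For example, a protocol with two goals where the first attempt at "loop" has an off-by-one bug (deduction 0.5 under the assumed rubric) and the first attempt at "io" is clean would score (0.5 + 1.0) / 2 = 0.75.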