Computer-based assessment is a useful tool for handling large classes and is widely used for the automated assessment of student programming assignments in Computer Science. The form this assessment takes, however, varies widely, from a simple acknowledgement of receipt to a detailed analysis of output, structure and code. This study focuses on output analysis of submitted student assignment code and the degree to which changes in automated feedback influence student marks and persistence in submission. Data were collected over a four-year period across 22 courses, but we focus on one course in this paper. Assignments were grouped by the number of distinct units of automated feedback delivered per assignment, to investigate whether students changed their submission behaviour or performance as the set of marks a student could achieve changed. We found that pre-deadline results improved as the number of feedback units increased, and that post-deadline activity also increased as more feedback units were available.
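The abstract describes grading by analysing the output of submitted code, with each assignment delivering some number of distinct units of automated feedback. As a rough illustration only, the sketch below shows one way such an output-comparison grader might be structured; the names (TestCase, run_submission) and the one-mark-per-test scheme are assumptions for illustration, not the system studied in the paper.

```python
# Minimal sketch of output-based assessment: run a student program
# against a set of test cases and compare stdout with expected output.
# All identifiers here are hypothetical, not the paper's actual system.
import subprocess
from dataclasses import dataclass

@dataclass
class TestCase:
    stdin: str      # input fed to the student program
    expected: str   # expected stdout, compared after stripping whitespace
    feedback: str   # the "unit of feedback" shown when this test fails

def run_submission(path: str, tests: list[TestCase]) -> tuple[int, list[str]]:
    """Run a student program against each test; return (marks, feedback units)."""
    marks, units = 0, []
    for t in tests:
        result = subprocess.run(
            ["python", path], input=t.stdin,
            capture_output=True, text=True, timeout=5,
        )
        if result.stdout.strip() == t.expected.strip():
            marks += 1                # one mark per passing test (assumed scheme)
        else:
            units.append(t.feedback)  # one feedback unit per failing test
    return marks, units
```

Under a scheme like this, adding test cases enlarges both the set of achievable marks and the number of feedback units a student can receive per submission, which is the variable the study groups assignments by.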