Synthesis and analysis of automatic assessment methods in CS1: generating intelligent MCQs
Proceedings of the 36th SIGCSE technical symposium on Computer science education
A system has been developed to provide automated assessment in CS1. During the academic year 2004–2005, the system was evaluated empirically: a sample group of students was examined four times during the year using both traditional assessment methods and the automated techniques. A significant correlation was found between performance on the two forms of assessment; however, the correlation was strong only for students who performed well during the year.

To extend this study, students were interviewed and asked their opinion of the generated questions. They offered reasons for the variation in their performance and provided insight into where the discrepancies lie. We discovered that weaker students were relying on rote learning to score marks in the class exams.

As the survey was conducted on paper, a large amount of student rough work ("doodles") was collected; an analysis of this rough work is also discussed.
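The correlation analysis described above can be sketched in a few lines. This is an illustrative example only, not the paper's actual procedure or data: the function and the sample marks below are hypothetical, showing how a Pearson correlation between traditional-exam scores and automated-assessment scores might be computed.

```python
# Illustrative sketch (not from the paper): correlating students' marks on
# traditional exams with their marks on automatically generated questions.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical marks for six students: traditional exam vs. automated MCQs.
traditional = [72, 85, 60, 90, 55, 78]
automated   = [70, 88, 50, 92, 40, 75]

print(f"r = {pearson(traditional, automated):.2f}")
```

In practice one would also report a p-value and, as the study found, check whether the relationship holds across the whole ability range rather than only among the stronger students.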