Common exam practice centres on two question types: code tracing (reading) and code writing. Code tracing is commonly believed to be easier than code writing, yet the two clearly demand different skills, and the two question types also differ in their value on an exam. Pedagogically, code tracing on paper is an authentic task, whereas code writing on paper is less so; still, few instructors are willing to forgo a code-writing question on an exam. Is there another way, easier to grade, to capture the "problem solving through code creation" process we wish to examine? In this paper we propose Parsons puzzle-style problems for this purpose. We explore their potential both qualitatively, through interviews, and quantitatively, through a set of CS1 exams. We find notable correlation between Parsons scores and code-writing scores, but low correlation between code writing and tracing and between Parsons problems and tracing. We also make the case that marks from a Parsons problem reveal what students do not know, in both syntax and logic, far less ambiguously than marks from a code-writing problem. We make recommendations on the design of Parsons problems for the exam setting, discuss their potential uses, and urge further investigation of Parsons problems for the assessment of CS1 students.
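To make the question type concrete, the sketch below shows a minimal Parsons-style puzzle: a correct solution, a student ordering with two lines swapped, and a naive position-matching grader. The task (`sum_evens`) and the line-by-line scoring scheme are illustrative assumptions for this example, not the scoring used in the study.

```python
# Correct solution: sum the even numbers in a list, given as ordered lines.
solution = [
    "def sum_evens(nums):",
    "    total = 0",
    "    for n in nums:",
    "        if n % 2 == 0:",
    "            total += n",
    "    return total",
]

# A hypothetical student submission with two lines transposed.
submission = [
    "def sum_evens(nums):",
    "    total = 0",
    "    for n in nums:",
    "            total += n",
    "        if n % 2 == 0:",
    "    return total",
]

def parsons_score(submitted, correct):
    """Fraction of lines placed in exactly the right position (naive scheme)."""
    matches = sum(1 for s, c in zip(submitted, correct) if s == c)
    return matches / len(correct)

print(parsons_score(submission, solution))  # 4 of 6 lines are in place
```

Because every fragment is supplied, a wrong answer pinpoints an ordering (logic) or indentation (syntax) error rather than leaving the grader to guess intent, which is the disambiguation advantage argued above.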