This paper describes the use of random code generation and mutation as a method for synthesising multiple-choice questions for automated assessment. While multiple-choice questions have proved a feasible way to test whether students have suitable knowledge or comprehension of a programming concept, writing questions that accurately test that knowledge is time intensive.

The paper proposes two methods of generating code whose behaviour can then be used to closely examine students' comprehension. The first takes as input a suite of template programs, performs slight mutations on each program, and asks students to comprehend the resulting program. The second performs traversals of a syntax tree of possible programs, yielding slightly erratic but compilable code, again with behaviour that students can be questioned about. Besides generating code, both methods also yield distracting alternative answers to challenge the students. Finally, the paper discusses the gradual introduction of these automatically generated questions as an assessment method and weighs the relative merits of each technique.
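The first method described above can be sketched concretely. The following is a minimal illustration, not the paper's actual implementation: it mutates a small template program (here Python, with a single operator swap as the mutation) and derives a multiple-choice question whose correct answer is the mutant's output and whose distractors come from plausible nearby values. All names and the choice of mutation are assumptions for illustration.

```python
import ast
import random

# Hypothetical template program; the paper assumes a suite of such templates.
TEMPLATE = """
total = 1
for i in range(1, 5):
    total = total + i
result = total
"""

# One illustrative mutation: swap an arithmetic operator for another.
OP_SWAPS = {ast.Add: ast.Mult, ast.Mult: ast.Add, ast.Sub: ast.Add}

class OpMutator(ast.NodeTransformer):
    """Replace the target_index-th swappable binary operator with its swap."""
    def __init__(self, target_index):
        self.target_index = target_index
        self.seen = 0

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if type(node.op) in OP_SWAPS:
            if self.seen == self.target_index:
                node.op = OP_SWAPS[type(node.op)]()
            self.seen += 1
        return node

def run(src):
    # Execute the (generated, trusted) program and read its 'result' variable.
    env = {}
    exec(src, env)
    return env["result"]

def make_question(template, rng):
    tree = ast.parse(template)
    OpMutator(target_index=0).visit(tree)
    ast.fix_missing_locations(tree)
    mutated_src = ast.unparse(tree)
    correct = run(mutated_src)
    # Distractors: the unmutated template's output plus small perturbations,
    # excluding the correct answer itself.
    distractors = {run(template), correct + 1, correct - 1} - {correct}
    options = [correct] + sorted(distractors)
    rng.shuffle(options)
    return mutated_src, options, correct

if __name__ == "__main__":
    src, options, answer = make_question(TEMPLATE, random.Random(0))
    print("What value does this program assign to result?")
    print(src)
    print("options:", options)
```

Here the template sums 1 through 4 starting from 1 (giving 11), while the mutant multiplies instead (giving 24), so the original program's output serves as a natural distractor. The second method, traversing a grammar or syntax tree of possible programs, would replace the fixed template with generated code, but the question-and-distractor construction would follow the same pattern.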