The supplemental proceedings of the conference on Integrating technology into computer science education: working group reports and supplemental proceedings
Developing a digital library of computer science teaching resources
ACM SIGCSE Bulletin
ITiCSE-WGR '99 Working group reports from ITiCSE on Innovation and technology in computer science education
First year programming: let all the flowers bloom
ACE '03 Proceedings of the fifth Australasian conference on Computing education - Volume 20
Teaching Java first: experiments with a pigs-early pedagogy
ACE '04 Proceedings of the Sixth Australasian Conference on Computing Education - Volume 30
A multi-national study of reading and tracing skills in novice programmers
Working group reports from ITiCSE on Innovation and technology in computer science education
The Carrick vision and computing education: four case studies in multi-institutional collaboration
ACE '07 Proceedings of the ninth Australasian conference on Computing education - Volume 66
After the gold rush: toward sustainable scholarship in computing
ACE '08 Proceedings of the tenth conference on Australasian computing education - Volume 78
Ten years of the Australasian Computing Education Conference
ACE '09 Proceedings of the Eleventh Australasian Conference on Computing Education - Volume 95
Explaining program code: giving students the answer helps - but only just
Proceedings of the seventh international workshop on Computing education research
Wrong is a relative concept: part marks for multiple-choice questions
ACE '11 Proceedings of the Thirteenth Australasian Computing Education Conference - Volume 114
This paper presents a multiple-choice question exam used to test students completing their first semester of programming. Assumptions in the design of the exam are identified, and student performance on the questions is analysed in detail. The intent behind this exercise is to begin a community process of identifying the criteria that define an effective multiple-choice exam for testing novice programmers. The long-term aim is to develop consensus on peer review criteria for such exams. This consensus is seen as a necessary precondition for any future public domain library of such multiple-choice questions.