We describe and advocate an approach to student program grading based on interactive program demonstration. Although one-on-one interaction is a hallmark of good teaching in many disciplines, the pressure of class loads and the availability of sophisticated automatic tools have steered some instructors away from close interaction with students in computer science. We feel that our approach has significant advantages over both more traditional methods (submission of work on paper for "off-line" grading) and newer ones based on automated tools. In particular, we feel that interactive demonstration offers a rich and compelling experience for instructors and students alike. We do not believe our approach is especially novel, but wish to recommend it to those who may never have considered it, and to promote it within a continuing dialogue about assessment techniques.

Interactive demonstration has some disadvantages, of course, not the least of which is the difficulty of implementing it in settings with larger class sizes. Even in larger schools, however, the approach may be suitable for selected assignments, and most other drawbacks can be tempered by appropriate modifications to the basic framework.

We begin with some general discussion of the goals and pitfalls of assessing student programs and mention some possible alternative approaches. We then describe the interactive demonstration style of grading as we have developed it. Next we consider the advantages and disadvantages of our approach and discuss ways to ameliorate the latter. Finally, we use our stated criteria to compare our approach with others and draw some summary conclusions.