The Praktomat system lets students read, review, and assess one another's programs in order to improve quality and style. After a successful submission, a student can retrieve and review the program of a fellow student selected by Praktomat. Once the review is complete, the student may obtain reviews of their own work and re-submit improved versions of their program. The reviewing process is independent of grading, and the risk of plagiarism is reduced by personalized assignments and automatic testing of submitted programs. In a survey, more than two thirds of the students affirmed that reading each other's programs improved the quality of their own; this is also supported by statistical data.
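The review-assignment step described above can be sketched in a few lines. This is a hypothetical illustration, not Praktomat's actual implementation: the function name `assign_reviews` and the rotation policy are assumptions. The key property it demonstrates is that every student is paired with a fellow student's program, never their own.

```python
import random

def assign_reviews(submissions):
    """Map each student to one fellow student's program for review.

    Hypothetical sketch: shuffling then rotating by one position
    guarantees no student receives their own submission, as long as
    at least two students have submitted.
    """
    students = list(submissions)
    if len(students) < 2:
        return {}  # nothing to cross-review yet
    random.shuffle(students)
    # Student i reviews the program submitted by student i+1 (wrapping).
    return {
        students[i]: submissions[students[(i + 1) % len(students)]]
        for i in range(len(students))
    }

subs = {"alice": "alice.c", "bob": "bob.c", "carol": "carol.c"}
pairs = assign_reviews(subs)
assert all(pairs[s] != subs[s] for s in subs)  # no self-review
```

In practice a system like this would also need to balance review load across resubmissions and handle late arrivals, but a simple rotation already yields the one-review-per-student pairing the abstract describes.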