Automated assessment systems are gaining popularity in computer programming courses. In this paper we present an empirical evaluation of Mooshak, an online judge that verifies program correctness, to determine its usefulness in classroom settings. In particular, we study in detail how students use the tool, analyze their opinions and critiques of it, and measure other effects, such as its impact on dropout rates. The experience was carried out in a course on algorithm design and analysis, where we collected information through several questionnaires and from the data generated by the tool during the course. Among the main findings we highlight that: (1) usage of the tool was adequate relative to students' own testing; (2) its feedback needs to be richer in order to improve its acceptance among students; and (3) there was no statistical evidence that Mooshak reduced the dropout rate.
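For readers unfamiliar with online judges, the core verdict model that systems like Mooshak implement can be sketched as follows: run the submitted program on a set of test inputs and compare its output against the expected output, reporting a verdict such as "Accepted" or "Wrong Answer". This is a minimal illustrative sketch, not Mooshak's actual implementation or API; the function name, verdict strings, and comparison convention are assumptions chosen to mirror common judge behavior.

```python
import subprocess

def judge(command, test_cases, time_limit=2.0):
    """Return a judge-style verdict for one submission.

    command: how to run the submitted program, e.g. ["python3", "sub.py"]
    test_cases: list of (input_text, expected_output) pairs
    (Illustrative only -- not Mooshak's real interface.)
    """
    for stdin_text, expected in test_cases:
        try:
            result = subprocess.run(
                command,
                input=stdin_text,
                capture_output=True,
                text=True,
                timeout=time_limit,
            )
        except subprocess.TimeoutExpired:
            return "Time Limit Exceeded"
        if result.returncode != 0:
            return "Runtime Error"
        # Whitespace-insensitive output comparison, a common judge convention.
        if result.stdout.split() != expected.split():
            return "Wrong Answer"
    return "Accepted"
```

For example, a submission that doubles its input would be judged `"Accepted"` on the test cases `[("3", "6"), ("5", "10")]`, while a program printing the wrong value would receive `"Wrong Answer"` on the first failing case. The black-box nature of this check is also what limits the feedback richness discussed in the abstract: the student learns only the verdict, not which part of the program is at fault.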