In this article we explore a problematic aspect of the automated assessment of diagrams. Diagrams have partial, and sometimes inconsistent, semantics. Typically, much of the meaning of a diagram resides in its labels; however, the choice of labeling is largely unrestricted. This means a correct solution may use labels that differ from, yet are semantically equivalent to, those in the specimen solution. A human marker can easily overcome this problem; for e-assessment it is challenging. We empirically explore the scale of the synonym problem by analyzing 160 student solutions to a UML task. We find that the cumulative growth of synonyms shows only a limited tendency to slow at the margin, even after applying a range of text-processing algorithms such as stemming and automatic correction of spelling errors. This finding has significant implications for the ease with which we may develop future e-assessment systems for diagrams: the need for better algorithms for assessing the semantic similarity of labels becomes inescapable.
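The measurement described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the label data is invented, and the suffix-stripping "stemmer" is a deliberately naive stand-in for the text-processing algorithms the study used. It shows why lexical normalization merges inflected variants (e.g. "orders" vs. "order") but leaves true synonyms (e.g. "Customer" vs. "Client") as distinct labels, so the cumulative vocabulary keeps growing.

```python
def naive_stem(label: str) -> str:
    """Crude stemmer: lowercase, then strip one common English suffix.

    A stand-in for a real stemmer (e.g. Porter); only for illustration.
    """
    w = label.lower().strip()
    for suffix in ("ings", "ing", "ers", "er", "es", "s"):
        if w.endswith(suffix) and len(w) > len(suffix) + 2:
            return w[: -len(suffix)]
    return w

def cumulative_vocab(solutions, normalise=lambda s: s.lower()):
    """Running count of distinct labels seen after each student solution."""
    seen, counts = set(), []
    for labels in solutions:
        seen.update(normalise(label) for label in labels)
        counts.append(len(seen))
    return counts

# Hypothetical labels from four student solutions to the same UML task.
solutions = [
    ["Customer", "Order", "OrderLine"],
    ["customer", "orders", "order line"],
    ["Client", "Purchase", "PurchaseItem"],
    ["customers", "ordering", "line item"],
]

# Case folding alone vs. stemming: the stemmed curve grows more slowly,
# but synonym pairs like customer/client are never merged, so neither
# curve flattens out.
print(cumulative_vocab(solutions))
print(cumulative_vocab(solutions, normalise=naive_stem))
```

In the study's terms, a plot of these running counts against the number of solutions processed would level off only if normalization exhausted the label variation; the finding is that it does not, which is why label-level semantic similarity measures are needed.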