The Problem of Labels in E-Assessment of Diagrams

  • Authors: Ambikesh Jayal; Martin Shepperd
  • Affiliations: Brunel University; Brunel University
  • Venue: Journal on Educational Resources in Computing (JERIC)
  • Year: 2009

Abstract

In this article we explore a problematic aspect of the automated assessment of diagrams. Diagrams have partial and sometimes inconsistent semantics. Typically, much of the meaning of a diagram resides in its labels; however, the choice of labeling is largely unrestricted. This means a correct solution may use labels that differ from, yet are semantically equivalent to, those of the specimen solution. A human marker can easily overcome this problem; unfortunately, for e-assessment it is challenging. We empirically explore the scale of the synonym problem by analyzing 160 student solutions to a UML task. We find that the cumulative growth of synonyms shows only a limited tendency to diminish at the margin, despite the use of a range of text processing algorithms such as stemming and auto-correction of spelling errors. This finding has significant implications for the ease with which we may develop future e-assessment systems for diagrams: the need for better algorithms for assessing the semantic similarity of labels becomes inescapable.
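
To make the normalization steps concrete, the following is a minimal Python sketch of how diagram labels might be canonicalized before matching, using NLTK's PorterStemmer for stemming and the standard library's difflib.get_close_matches as a simple stand-in for spelling auto-correction. The vocabulary SPECIMEN_VOCAB and the function normalize_label are hypothetical names introduced here for illustration; this is not the authors' actual pipeline.

```python
from difflib import get_close_matches
from nltk.stem import PorterStemmer  # assumes NLTK is installed

stemmer = PorterStemmer()

# Hypothetical reference vocabulary drawn from the specimen solution's labels.
SPECIMEN_VOCAB = ["customer", "order", "payment", "invoice"]

def normalize_label(label: str) -> str:
    """Lower-case, auto-correct, and stem a single-word diagram label."""
    word = label.strip().lower()
    # Simple spelling auto-correction: snap to the closest specimen term,
    # if one is sufficiently similar.
    matches = get_close_matches(word, SPECIMEN_VOCAB, n=1, cutoff=0.8)
    if matches:
        word = matches[0]
    # Stemming collapses inflectional variants to a common token.
    return stemmer.stem(word)

# Superficial variants collapse to the same token:
print(normalize_label("Customers"))  # -> "custom"
print(normalize_label("custmer"))    # -> "custom"
# But a true synonym still fails to match:
print(normalize_label("client"))     # -> "client"
```

As the last example suggests, stemming and spelling correction handle surface variation but cannot detect genuine synonyms such as "client" versus "customer", which is exactly the residual problem the article quantifies.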