The evaluation of electronic marking of examinations

  • Authors: Pete Thomas
  • Affiliations: Open University, Milton Keynes, UK
  • Venue: Proceedings of the 8th annual conference on Innovation and technology in computer science education
  • Year: 2003

Abstract

This paper discusses an approach to the electronic (automatic) marking of examination papers, in particular the extent to which it is possible to mark a candidate's answers automatically and return, within a very short period of time, a result comparable with a manually produced score. The investigation showed that there are good reasons for manual intervention in a predominantly automatic process. The paper discusses the results of tests of the automatic marking process which, in two experiments, yielded grades for examination scripts comparable with those awarded by human markers (although the automatic grade tends to be the lower of the two). An analysis of the correlations between the human and automatic markers shows highly significant relationships among the human markers (between 0.91 and 0.95) and a significant relationship between the average human marker score and the electronic score (0.86).
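The kind of inter-marker agreement analysis reported in the abstract can be illustrated with a short sketch. The code below is not taken from the paper; the marker names and per-script scores are made-up placeholders, and it simply shows how pairwise Pearson correlations between human markers, and between the average human score and an automatic score, might be computed.

    # Illustrative sketch only: hypothetical scores for a handful of scripts.
    from itertools import combinations

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-script scores from three human markers and one automatic marker.
    scores = {
        "human_1":   np.array([72, 58, 81, 64, 90, 47, 69, 77]),
        "human_2":   np.array([70, 60, 79, 66, 88, 50, 71, 75]),
        "human_3":   np.array([74, 55, 83, 62, 91, 45, 68, 78]),
        "automatic": np.array([68, 54, 78, 60, 86, 44, 65, 73]),
    }

    # Pairwise correlations among the human markers.
    for a, b in combinations(["human_1", "human_2", "human_3"], 2):
        r, p = pearsonr(scores[a], scores[b])
        print(f"{a} vs {b}: r = {r:.2f} (p = {p:.3f})")

    # Correlation between the average human score and the automatic score.
    human_mean = np.mean([scores["human_1"], scores["human_2"], scores["human_3"]], axis=0)
    r, p = pearsonr(human_mean, scores["automatic"])
    print(f"mean human vs automatic: r = {r:.2f} (p = {p:.3f})")

On real data, correlations in the ranges the abstract reports (0.91 to 0.95 among human markers, 0.86 between the average human score and the electronic score) would indicate strong agreement, with the automatic marker tracking the human consensus slightly less closely than the humans track each other.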