Short answer assessment: establishing links between research strands

  • Authors: Ramon Ziai, Niels Ott, Detmar Meurers
  • Affiliations: Universität Tübingen (all authors)
  • Venue: Proceedings of the Seventh Workshop on Building Educational Applications Using NLP
  • Year: 2012

Abstract

A number of different research subfields are concerned with the automatic assessment of student answers to comprehension questions, from language learning contexts to computer science exams. They share the need to evaluate free-text answers but differ in task setting and grading/evaluation criteria, among other aspects. This paper aims to foster synergy between the different research strands. It discusses these strands, details the crucial differences between them, and explores under which circumstances systems can be compared using publicly available data. To that end, we present results with the CoMiC-EN Content Assessment system (Meurers et al., 2011a) on the dataset published by Mohler et al. (2011) and outline what was necessary to perform this comparison. We conclude with a general discussion of the comparability and evaluation of short answer assessment systems.