Diagnosing meaning errors in short answers to reading comprehension questions

  • Authors:
  • Stacey Bailey; Detmar Meurers

  • Affiliations:
  • The Ohio State University, Columbus, Ohio; Universität Tübingen, Tübingen, Germany

  • Venue:
  • EANL '08 Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications
  • Year:
  • 2008

Abstract

A common focus of systems in Intelligent Computer-Assisted Language Learning (ICALL) is to provide immediate feedback to language learners working on exercises. Most of this research has concentrated on feedback on the form of the learner input. Foreign language teaching practice and second language acquisition research, on the other hand, emphasize the importance of exercises that require the learner to manipulate meaning. The ability of an ICALL system to diagnose and provide feedback on the meaning conveyed by a learner response depends on how well it can handle the response variation an activity allows. We focus on short-answer reading comprehension questions, which have a clearly defined target response but allow the learner to convey the meaning of that target in multiple ways. As the empirical basis of our work, we collected an English as a Second Language (ESL) learner corpus of responses to short-answer reading comprehension questions, for which two graders provided target answers and correctness judgments. On this basis, we developed a Content-Assessment Module (CAM), which performs shallow semantic analysis to diagnose meaning errors. On a held-out test set, it reaches an accuracy of 88% for semantic error detection and 87% for semantic error diagnosis.
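To make the idea of shallow content assessment concrete, the following is a minimal sketch of comparing a learner response against a target answer by lexical overlap. This is an illustrative simplification only, not the paper's actual CAM pipeline; the function names, stopword list, and threshold are hypothetical choices for the example.

```python
# Minimal sketch of shallow content assessment: judge a learner response
# by how many content words of the target answer it matches.
# NOTE: a deliberately simplified stand-in, not the CAM described in the paper.

def normalize(text: str) -> set:
    """Lowercase, strip punctuation, and drop a few common stopwords."""
    stopwords = {"the", "a", "an", "is", "are", "was", "were", "of", "to", "in"}
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in text.lower())
    return {t for t in cleaned.split() if t not in stopwords}

def assess(response: str, target: str, threshold: float = 0.6) -> str:
    """Label a response 'correct' if enough target content words are matched."""
    r, t = normalize(response), normalize(target)
    overlap = len(r & t) / len(t) if t else 0.0
    return "correct" if overlap >= threshold else "meaning error"

print(assess("The dog chased the cat", "A dog chased a cat"))  # correct
print(assess("The cat slept all day", "A dog chased a cat"))   # meaning error
```

A real system would additionally handle synonyms, paraphrases, and word order, which is why the paper's approach aligns responses to targets with multiple shallow semantic comparisons rather than raw token overlap.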