Native judgments of non-native usage: experiments in preposition error detection

  • Authors:
  • Joel R. Tetreault; Martin Chodorow

  • Affiliations:
  • Educational Testing Service, Princeton, NJ; Hunter College of CUNY, New York, NY

  • Venue:
  • HumanJudge '08 Proceedings of the Workshop on Human Judgements in Computational Linguistics
  • Year:
  • 2008

Abstract

Evaluation and annotation are two of the greatest challenges in developing NLP instructional or diagnostic tools that mark grammar and usage errors in the writing of non-native speakers. Past approaches have commonly relied on a single rater to annotate a corpus of learner errors for comparison with system output. In this paper, we show how using only one rater can skew system evaluation, and we then present a sampling approach that makes it possible to evaluate a system more efficiently.