They can help: using crowdsourcing to improve the evaluation of grammatical error detection systems

  • Authors:
  • Nitin Madnani; Joel Tetreault; Martin Chodorow; Alla Rozovskaya

  • Affiliations:
  • Educational Testing Service, Princeton, NJ; Educational Testing Service, Princeton, NJ; Hunter College of CUNY; University of Illinois at Urbana-Champaign

  • Venue:
  • HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2
  • Year:
  • 2011

Abstract

Despite the rising interest in developing grammatical error detection systems for non-native speakers of English, progress in the field has been hampered by a lack of informative metrics and an inability to directly compare the performance of systems developed by different researchers. In this paper we address these problems by presenting two evaluation methodologies, both based on a novel use of crowdsourcing.