Evaluating and automating the annotation of a learner corpus

  • Authors:
  • Alexandr Rosen; Jirka Hana; Barbora Štindlová; Anna Feldman

  • Affiliations:
  • Charles University, Prague, Czech Republic; Charles University, Prague, Czech Republic; Technical University, Liberec, Czech Republic; Montclair State University, Montclair, USA

  • Venue:
  • Language Resources and Evaluation
  • Year:
  • 2014

Abstract

The paper describes a corpus of texts produced by non-native speakers of Czech. We discuss its annotation scheme, consisting of three interlinked tiers, designed to handle a wide range of error types present in the input. Each tier corrects different types of errors; links between the tiers make it possible to capture errors in word order and in complex discontinuous expressions. Errors are not only corrected but also classified. The annotation scheme is tested on a data set of approximately 175,000 words, with fair inter-annotator agreement results. We also explore the possibility of applying automated linguistic annotation tools (taggers, spell checkers, and grammar checkers) to the learner text to support, or even substitute for, manual annotation.
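The abstract reports "fair" inter-annotator agreement without naming the metric used in the paper. A standard chance-corrected measure for two annotators assigning categorical labels (such as error-type classes) is Cohen's kappa; the sketch below is a generic illustration of that measure, not the authors' actual evaluation code:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' label sequences of equal length.

    Returns 1.0 for perfect agreement and 0.0 for agreement no better
    than chance (given each annotator's marginal label distribution).
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[lab] * freq_b[lab] for lab in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical error-type labels from two annotators on the same tokens:
ann1 = ["incor_infl", "incor_base", "incor_infl", "wrong_order"]
ann2 = ["incor_infl", "incor_base", "incor_base", "wrong_order"]
print(cohens_kappa(ann1, ann2))
```

The error-type label names above are illustrative placeholders; they are not taken from the paper's tagset.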