(Meta-) evaluation of machine translation

  • Authors: Chris Callison-Burch (Johns Hopkins University), Cameron Fordyce (CELCT), Philipp Koehn (University of Edinburgh), Christof Monz (University of London), Josh Schroeder (University of Edinburgh)

  • Venue: StatMT '07: Proceedings of the Second Workshop on Statistical Machine Translation
  • Year: 2007

Abstract

This paper evaluates the translation quality of machine translation systems for 8 language pairs: translating French, German, Spanish, and Czech to English and back. We carried out an extensive human evaluation which allowed us not only to rank the different MT systems, but also to perform higher-level analysis of the evaluation process. We measured timing and intra- and inter-annotator agreement for three types of subjective evaluation. We measured the correlation of automatic evaluation metrics with human judgments. This meta-evaluation reveals surprising facts about the most commonly used methodologies.
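For readers unfamiliar with the meta-evaluation machinery the abstract refers to, the sketch below illustrates the two standard statistics involved: a kappa coefficient for intra- and inter-annotator agreement, and Spearman's rank correlation for comparing an automatic metric's ranking of systems against the human ranking. This is a minimal sketch, assuming pooled chance agreement for kappa and no rank ties; the function names and sample data are illustrative, not taken from the paper.

```python
from collections import Counter
from itertools import combinations

def kappa(annotations):
    """Kappa coefficient: (P(A) - P(E)) / (1 - P(E)), where P(A) is the
    observed pairwise agreement and P(E) the agreement expected by chance
    (estimated here from the pooled label distribution).
    `annotations`: one list of annotator labels per judged item."""
    pairs = agree = 0
    label_counts = Counter()
    for labels in annotations:
        label_counts.update(labels)
        # Count agreement over every pair of annotators for this item.
        for a, b in combinations(labels, 2):
            pairs += 1
            agree += (a == b)
    p_a = agree / pairs
    total = sum(label_counts.values())
    p_e = sum((c / total) ** 2 for c in label_counts.values())
    return (p_a - p_e) / (1 - p_e)

def spearman_rho(xs, ys):
    """Spearman rank correlation (assumes no ties): rank both score lists,
    then apply rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical data: 3 annotators judging 2 items on a 3-way scale.
print(kappa([["better", "better", "worse"], ["same", "same", "same"]]))
# Hypothetical metric scores vs. human scores for 4 MT systems.
print(spearman_rho([0.31, 0.28, 0.25, 0.22], [3.1, 3.4, 2.9, 2.5]))
```

A rho near 1 means the automatic metric ranks the systems almost exactly as the human judges do; a kappa near 0 means the annotators agree no more often than chance would predict.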