Findings of the 2012 Workshop on Statistical Machine Translation

  • Authors:
  • Chris Callison-Burch; Philipp Koehn; Christof Monz; Matt Post; Radu Soricut; Lucia Specia

  • Affiliations:
  • Johns Hopkins University; University of Edinburgh; University of Amsterdam; Johns Hopkins University; SDL Language Weaver; University of Sheffield

  • Venue:
  • WMT '12 Proceedings of the Seventh Workshop on Statistical Machine Translation
  • Year:
  • 2012

Abstract

This paper presents the results of the WMT12 shared tasks, which included a translation task, a task for machine translation evaluation metrics, and a task for run-time estimation of machine translation quality. We conducted a large-scale manual evaluation of 103 machine translation systems submitted by 34 teams. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 12 evaluation metrics. We introduced a new quality estimation task this year, and evaluated submissions from 11 teams.
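As the abstract notes, the metrics task measures how strongly each automatic metric's system-level scores correlate with the human ranking of systems. Below is a minimal sketch of that kind of system-level comparison using Spearman's rank correlation; the system names and scores are made-up placeholders, not results from the paper, and the code assumes scipy is available.

```python
# Sketch: correlate an automatic metric's system-level scores with a human
# ranking of the same systems (illustrative data only, not WMT12 results).
from scipy.stats import spearmanr

# Hypothetical human ranking (1 = best) and metric scores for five systems.
human_rank = {"sysA": 1, "sysB": 2, "sysC": 3, "sysD": 4, "sysE": 5}
metric_score = {"sysA": 0.31, "sysB": 0.29, "sysC": 0.30, "sysD": 0.22, "sysE": 0.18}

systems = sorted(human_rank)
rho, _ = spearmanr(
    [human_rank[s] for s in systems],
    # Negate scores so that "higher score is better" aligns with
    # "lower rank number is better"; a good metric then yields rho near 1.
    [-metric_score[s] for s in systems],
)
print(f"Spearman correlation with human ranking: {rho:.3f}")
```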