Findings of the 2010 Joint Workshop on Statistical Machine Translation and Metrics for Machine Translation

  • Authors:
  • Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, Omar F. Zaidan

  • Affiliations:
  • Johns Hopkins University; University of Edinburgh; University of Amsterdam; National Institute of Standards and Technology; National Institute of Standards and Technology; Johns Hopkins University

  • Venue:
  • WMT '10 Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR
  • Year:
  • 2010

Abstract

This paper presents the results of the WMT10 and MetricsMATR10 shared tasks, which included a translation task, a system combination task, and an evaluation task. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the ranking of these systems to measure how strongly each of 26 automatic metrics correlates with human judgments of translation quality. This year we also investigated increasing the number of human judgments by hiring non-expert annotators through Amazon's Mechanical Turk.