The NIST 2008 Metrics for machine translation challenge--overview, methodology, metrics, and results

  • Authors:
  • Mark Przybocki; Kay Peterson; Sébastien Bronsart; Gregory Sanders

  • Affiliations:
  • Multimodal Information Group, National Institute of Standards and Technology, Gaithersburg, USA (all four authors)

  • Venue:
  • Machine Translation
  • Year:
  • 2009

Abstract

This paper discusses the evaluation of automated metrics developed to evaluate machine translation (MT) technology. A general discussion of the usefulness of automated metrics is offered. The NIST MetricsMATR evaluation of MT metrology is described, including its objectives, protocols, participants, and test data. The methodology employed to evaluate the submitted metrics is reviewed, and a summary is provided of the general classes of evaluated metrics. Overall results of this evaluation are presented, primarily by means of correlation statistics showing the degree of agreement between the automated metric scores and the scores of human judgments. Metrics are analyzed at the sentence, document, and system levels, with results conditioned on various properties of the test data. The paper concludes with some perspective on the improvements that should be incorporated into future evaluations of metrics for MT evaluation.
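The correlation analysis the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual methodology: it computes a Pearson correlation coefficient between hypothetical per-system automated metric scores and hypothetical mean human judgment scores (all data values are invented for illustration).

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: one automated metric score and one mean human
# judgment score per MT system (system-level agreement).
metric_scores = [0.31, 0.42, 0.55, 0.60, 0.72]
human_scores = [3.1, 3.4, 4.0, 4.2, 4.8]
print(round(pearson(metric_scores, human_scores), 3))
```

A value near 1.0 would indicate strong system-level agreement between the metric and the human judgments; evaluations of this kind typically also report rank-based statistics such as Spearman's rho, which this sketch omits.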