Machine translation evaluation versus quality estimation

  • Authors:
  • Lucia Specia; Dhwaj Raj; Marco Turchi

  • Affiliations:
  • Research Group in Computational Linguistics, University of Wolverhampton, Wolverhampton, UK; Indian Institute of Information Technology, Allahabad, India; European Commission --- JRC (IPSC), 21020 Ispra, Italy

  • Venue:
  • Machine Translation
  • Year:
  • 2010


Abstract

Most evaluation metrics for machine translation (MT) require reference translations for each sentence in order to produce a score reflecting certain aspects of its quality. The de facto standard metrics, BLEU and NIST, are known to correlate well with human evaluation at the corpus level, but this is not the case at the segment level. As an attempt to overcome these two limitations, we address the problem of evaluating the quality of MT as a prediction task, where reference-independent features are extracted from the input sentences and their translations, and a quality score is predicted using models learned from training data. We show that this approach yields better correlation with human evaluation than commonly used metrics, even with models trained on different MT systems, language pairs and text domains.
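The abstract frames quality estimation as supervised prediction over reference-independent features. The sketch below illustrates that general setup with a handful of hypothetical surface features and an SVR regressor; the feature set, training data and model choice are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of sentence-level quality estimation as a regression task,
# assuming reference-free features (lengths, length ratio, punctuation counts)
# and human quality scores for training. All features and data are illustrative.
import numpy as np
from sklearn.svm import SVR


def extract_features(source: str, translation: str) -> list[float]:
    """Compute simple reference-independent features for a (source, MT) pair."""
    src_tokens = source.split()
    tgt_tokens = translation.split()
    return [
        len(src_tokens),                                   # source length
        len(tgt_tokens),                                   # translation length
        len(tgt_tokens) / max(len(src_tokens), 1),         # length ratio
        sum(t.count(",") + t.count(".") for t in tgt_tokens),  # target punctuation
    ]


# Hypothetical training data: (source, MT output, human quality score in [1, 5])
train = [
    ("the cat sat on the mat", "le chat s'est assis sur le tapis", 4.5),
    ("this is a test sentence", "ceci est une phrase de test", 4.0),
    ("machine translation is hard", "la traduction automatique est", 2.0),
]

X = np.array([extract_features(src, mt) for src, mt, _ in train])
y = np.array([score for _, _, score in train])

model = SVR(kernel="rbf")  # a common regressor choice for QE-style tasks
model.fit(X, y)

# Predict quality for a new, unseen (source, translation) pair -- no reference needed
new_pair = ("the weather is nice today", "il fait beau aujourd'hui")
predicted_score = model.predict([extract_features(*new_pair)])[0]
print(f"Predicted quality score: {predicted_score:.2f}")
```

The key contrast with BLEU or NIST is visible in the prediction step: the score is produced from the source and the MT output alone, so the method can be applied where no reference translation exists.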