Regression for machine translation evaluation at the sentence level

  • Authors:
  • Joshua S. Albrecht; Rebecca Hwa

  • Affiliations:
  • Department of Computer Science, University of Pittsburgh, Pittsburgh, USA 15260 (both authors)

  • Venue:
  • Machine Translation
  • Year:
  • 2008

Abstract

Machine learning offers a systematic framework for developing metrics that use multiple criteria to assess the quality of machine translation (MT). However, learning introduces additional complexities that may affect the resulting metric's effectiveness. First, a learned metric is more reliable for translations that are similar to its training examples; this calls into question whether it is as effective in evaluating translations from systems that are not its contemporaries. Second, metrics trained from different sets of training examples may exhibit variations in their evaluations. Third, expensive developmental resources (such as translations that have been evaluated by humans) may be needed as training examples. This paper investigates these concerns in the context of using regression to develop metrics for evaluating machine-translated sentences. We track a learned metric's reliability across a five-year period to measure the extent to which it can evaluate sentences produced by other systems. We compare metrics trained under different conditions to measure their variations. Finally, we present an alternative formulation of metric training in which the features are based on comparisons against pseudo-references, in order to reduce the demand on human-produced resources. Our results confirm that regression is a useful approach for developing new metrics for MT evaluation at the sentence level.
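
To make the regression formulation concrete, below is a minimal sketch (not the authors' implementation) of training a sentence-level MT metric by regressing human quality judgments on features computed against pseudo-references, i.e., outputs of other MT systems used in place of human references. The specific features, the support vector regressor, and all example data are assumptions chosen for illustration.

```python
# Sketch: regression-based sentence-level MT metric with pseudo-reference features.
# Feature set, learner choice, and data are illustrative assumptions only.
from sklearn.svm import SVR
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

def similarity_features(hypothesis, pseudo_references):
    """Compare a candidate translation against pseudo-references
    (outputs of other MT systems standing in for human references)."""
    feats = []
    hyp_toks = hypothesis.split()
    for ref in pseudo_references:
        ref_toks = ref.split()
        # Smoothed sentence-level BLEU against this pseudo-reference.
        feats.append(sentence_bleu([ref_toks], hyp_toks, smoothing_function=smooth))
        # Simple length-ratio feature.
        feats.append(len(hyp_toks) / max(len(ref_toks), 1))
    return feats

# Hypothetical training data: MT outputs, pseudo-references for the same source
# sentences, and human quality scores serving as the regression targets.
train_hyps = ["the cat sat on mat", "a cat is sitting on the mat"]
train_pseudo_refs = [["the cat sat on the mat", "a cat sits on the mat"]] * 2
human_scores = [3.5, 4.5]

X = [similarity_features(h, refs) for h, refs in zip(train_hyps, train_pseudo_refs)]
model = SVR(kernel="rbf")  # support vector regression as the learned metric
model.fit(X, human_scores)

# Scoring a new sentence: the regression output is the metric's judgment.
new_feats = similarity_features(
    "the cat is on the mat",
    ["the cat sat on the mat", "a cat sits on the mat"],
)
print(model.predict([new_feats])[0])
```

In this setup the quality of the pseudo-references bounds how informative the features are, which is why the paper examines how metric reliability varies with the training examples and with the systems being evaluated.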