Evaluation of machine translation

  • Authors: John S. White; Theresa A. O'Connell; Lynn M. Carlson

  • Affiliations: PRC Inc., McLean, VA; PRC Inc., McLean, VA; DoD

  • Venue: HLT '93 Proceedings of the workshop on Human Language Technology

  • Year: 1993


Abstract

This paper reports results of the 1992 Evaluation of machine translation (MT) systems in the DARPA MT initiative and results of a Pre-test to the 1993 Evaluation. The DARPA initiative is unique in that the evaluated systems differ radically in languages translated, theoretical approach to system design, and intended end-user application. In the 1992 suite, a Comprehension Test compared the accuracy and interpretability of system and control outputs; a Quality Panel for each language pair judged the fidelity of translations from each source version. The 1993 suite evaluated adequacy and fluency and investigated three scoring methods.
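The adequacy and fluency judgments described above are typically collected from human raters per output segment and then averaged per system. The sketch below illustrates that aggregation step only; the 1-to-5 scale, the system names, and all numbers are illustrative assumptions, not data from the paper.

```python
# Minimal sketch of aggregating human judgments in a DARPA-style MT
# evaluation. The 1-5 scale and the rater data are hypothetical.

def mean_score(ratings):
    """Average a list of per-segment ratings (e.g. on a 1-5 scale)."""
    return sum(ratings) / len(ratings)

# Hypothetical adequacy and fluency judgments for two MT systems.
judgments = {
    "system_a": {"adequacy": [4, 5, 3, 4], "fluency": [3, 4, 4, 4]},
    "system_b": {"adequacy": [2, 3, 3, 2], "fluency": [3, 3, 2, 3]},
}

for system, scores in judgments.items():
    print(system,
          round(mean_score(scores["adequacy"]), 2),
          round(mean_score(scores["fluency"]), 2))
```

In the actual evaluations, such per-system means would then be compared across systems and against control translations.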