Validity of an Automatic Evaluation of Machine Translation Using a Word-Alignment-Based Classifier

  • Authors:
  • Katsunori Kotani;Takehiko Yoshimi;Takeshi Kutsumi;Ichiko Sata

  • Affiliations:
  • Kansai Gaidai University, Osaka, Japan 573-1001;Ryukoku University, Shiga, Japan 520-2194;Sharp Corporation, Nara, Japan 639-1186;Sharp Corporation, Nara, Japan 639-1186

  • Venue:
  • ICCPOL '09 Proceedings of the 22nd International Conference on Computer Processing of Oriental Languages. Language Technology for the Knowledge-based Economy
  • Year:
  • 2009


Abstract

Because human evaluation of machine translation is thorough but expensive, automatic evaluation is often used in developing a machine translation system. From the viewpoint of evaluation cost, there are two types of evaluation methods: one uses (multiple) reference translations, e.g., METEOR; the other classifies a machine translation as either machine-like or human-like based on properties of the translation, i.e., a classification-based method. Previous studies showed that classification-based methods can perform evaluation properly. These studies constructed classifiers that learn linguistic properties of a translation, such as sentence length, syntactic complexity, and literalness of translation, and their classifiers achieved high classification accuracy. These studies, however, did not examine whether classification accuracy reflects translation quality. We therefore investigated whether classification accuracy depends on translation quality. The experimental results showed that our method correctly distinguished degrees of translation quality.
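The classification-based approach described above can be sketched as follows. This is an illustrative assumption, not the authors' actual features or model: the feature set (sentence length and mean word length as a crude complexity proxy) and the nearest-centroid learner are stand-ins for the linguistic features and classifier used in the paper.

```python
# Hypothetical sketch of classification-based MT evaluation: learn surface
# properties of translations, then label each output "machine" or "human".
# Features and the nearest-centroid classifier are illustrative only.

def features(sentence):
    """Simple surface features: word count and mean word length
    (a crude stand-in for syntactic complexity)."""
    words = sentence.split()
    return (len(words), sum(len(w) for w in words) / len(words))

def train_centroids(labeled):
    """Compute one feature centroid per class from (sentence, label) pairs."""
    sums, counts = {}, {}
    for sent, label in labeled:
        f = features(sent)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in s) for lab, s in sums.items()}

def classify(sentence, centroids):
    """Assign the label whose centroid is nearest in feature space."""
    f = features(sentence)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(f, centroids[lab])))
```

Under this scheme, the share of system outputs classified as "human" serves as the quality signal: a better translation system should produce more human-like outputs, which is the dependence between classification accuracy and translation quality that the paper investigates.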