C4.5: programs for machine learning
A systematic comparison of various statistical alignment models
Computational Linguistics
A machine learning approach to the automatic evaluation of machine translation
ACL '01 Proceedings of the 39th Annual Meeting on Association for Computational Linguistics
Reliable measures for aligning Japanese-English news articles and sentences
ACL '03 Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1
Automatic evaluation of machine translation quality using n-gram co-occurrence statistics
HLT '02 Proceedings of the second international conference on Human Language Technology Research
A machine learning-based evaluation method for machine translation
SETN'10 Proceedings of the 6th Hellenic conference on Artificial Intelligence: theories, models and applications
Because human evaluation of machine translation is labor-intensive and expensive, automatic evaluation is commonly used when developing a machine translation system. From the viewpoint of evaluation cost, there are two types of evaluation methods: one compares system output against (multiple) reference translations, e.g., METEOR; the other classifies a translation as either machine-like or human-like based on properties of the translation itself, i.e., a classification-based method. Previous studies showed that classification-based methods can evaluate translations properly. These studies built classifiers that learn linguistic properties of translations, such as sentence length, syntactic complexity, and literalness of translation, and their classifiers achieved high classification accuracy. However, those studies did not examine whether classification accuracy actually reflects translation quality. We therefore investigated whether classification accuracy depends on translation quality. The experimental results showed that our method can correctly distinguish between different degrees of translation quality.
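As a rough illustration of the classification-based approach described above (not the authors' actual system), one can extract simple surface features from each sentence and fit a small decision rule that separates machine-like from human-like output. The feature choices (token count, type-token ratio as a crude lexical-variety proxy) and the toy sentences below are hypothetical, and a single decision stump stands in for a full decision-tree learner such as C4.5:

```python
# Hypothetical sketch of a classification-based MT evaluator:
# extract surface features, then learn a one-split decision stump.
# Features and training data are illustrative assumptions only.

def features(sentence):
    tokens = sentence.lower().split()
    length = len(tokens)
    # Type-token ratio: a crude proxy for lexical variety; repetitive
    # machine output tends to score lower on it.
    ttr = len(set(tokens)) / length if length else 0.0
    return [length, ttr]

def train_stump(X, y):
    """Exhaustively search one feature/threshold/direction split that
    minimizes training errors -- a minimal stand-in for C4.5."""
    best = None  # (errors, feature, threshold, sign)
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for sign in (1, -1):
                preds = [1 if sign * (x[f] - t) > 0 else 0 for x in X]
                err = sum(p != yi for p, yi in zip(preds, y))
                if best is None or err < best[0]:
                    best = (err, f, t, sign)
    _, f, t, sign = best
    return lambda x: 1 if sign * (x[f] - t) > 0 else 0

# Toy data (invented): label 1 = human-like, 0 = machine-like.
human = ["the committee approved the budget after a short debate"]
machine = ["the the committee approve budget after after short debate"]
X = [features(s) for s in human + machine]
y = [1] * len(human) + [0] * len(machine)

clf = train_stump(X, y)
accuracy = sum(clf(x) == yi for x, yi in zip(X, y)) / len(y)
```

The open question the abstract raises is exactly what this sketch cannot answer on its own: a classifier like `clf` can reach high training accuracy, yet that accuracy need not track how good the translations actually are, which is why the paper tests whether classification accuracy varies with translation quality.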