A new quantitative quality measure for machine translation systems
COLING '92 Proceedings of the 14th conference on Computational linguistics - Volume 2
BLEU: a method for automatic evaluation of machine translation
ACL '02 Proceedings of the 40th Annual Meeting on Association for Computational Linguistics
Feedback cleaning of machine translation rules using automatic evaluation
ACL '03 Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1
An automatic method for summary evaluation using multiple evaluation results by a manual method
COLING-ACL '06 Proceedings of the COLING/ACL on Main conference poster sessions
The main goal of this paper is to propose automatic schemes for the translation paired comparison method, which was proposed to precisely evaluate a speech translation system's capability. The method yields an objective evaluation result, namely a score on the Test of English for International Communication (TOEIC), which serves as a measure of speech translation capability. However, the method requires substantial evaluation cost, so its automation is an important subject of study. In the proposed approach, currently available automatic evaluation methods are applied to automate the translation paired comparison method. In the experiments, several automatic evaluation measures (BLEU, NIST, and a DP-based method) are applied, and their results show a good correlation with the evaluation results of the translation paired comparison method.
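To make the automatic measures mentioned above concrete, the following is a minimal sketch of sentence-level BLEU: clipped n-gram precision against one or more references, combined with a brevity penalty. This is an illustrative simplification, not the paper's implementation; the original BLEU is corpus-level and unsmoothed, whereas this sketch adds add-one smoothing so short sentences do not produce log(0). All function names here are hypothetical.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU sketch with add-one smoothing (illustrative only)."""
    cand = candidate.split()
    refs = [r.split() for r in references]
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        # Clip each n-gram count by its maximum count over all references.
        max_ref = Counter()
        for ref in refs:
            for g, c in Counter(ngrams(ref, n)).items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Add-one smoothing avoids log(0) when an n-gram order has no match.
        log_prec += math.log((clipped + 1) / (total + 1)) / max_n
    # Brevity penalty uses the reference length closest to the candidate's.
    ref_len = min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * math.exp(log_prec)
```

In the experimental setup described above, a score like this would be computed for each system output and then correlated with the paired-comparison (TOEIC-based) results; NIST differs mainly in weighting n-grams by their information content.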