ACM Computing Surveys (CSUR)
We propose new translation evaluation metrics for legal sentences. Most previous metrics proposed for evaluating machine translation systems rely on human reference translations and assume that several correct translations exist for one source sentence. However, readers usually take different translations to denote different meanings, so the existence of several translations of one legal expression may confuse them. Because translation variety is unacceptable and consistency is crucial in legal translation, we propose two metrics that evaluate the consistency of legal translations, and we illustrate their performance by comparing them with other metrics.
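The abstract does not define the two proposed metrics, but the core idea of consistency can be illustrated with a minimal sketch: count how often a recurring source expression is always rendered by the same target expression. This is a hypothetical illustration of the general notion, not the authors' actual metrics; the function name and the Japanese/English example pairs are invented for demonstration.

```python
from collections import defaultdict

def consistency_ratio(pairs):
    """Fraction of source expressions that always receive the same translation.

    `pairs` is a list of (source_expression, translation) tuples drawn from a
    translated corpus. A value of 1.0 means every source expression was
    translated identically on each occurrence; lower values indicate the kind
    of translation variety the abstract argues is unacceptable in legal text.
    """
    translations = defaultdict(set)
    for src, tgt in pairs:
        translations[src].add(tgt)
    if not translations:
        return 0.0
    consistent = sum(1 for tgts in translations.values() if len(tgts) == 1)
    return consistent / len(translations)

# Invented example: one expression translated two ways, one consistently.
pairs = [
    ("kitei", "provision"),
    ("kitei", "rule"),
    ("joukou", "paragraph"),
    ("joukou", "paragraph"),
]
print(consistency_ratio(pairs))  # → 0.5
```

A real metric along these lines would additionally need alignment between source and target expressions, which is why word-alignment techniques are relevant to legal translation support.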