Improving SMT quality with morpho-syntactic analysis
COLING '00 Proceedings of the 18th conference on Computational linguistics - Volume 2
BLEU: a method for automatic evaluation of machine translation
ACL '02 Proceedings of the 40th Annual Meeting on Association for Computational Linguistics
Extending the BLEU MT evaluation method with frequency weightings
ACL '04 Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics
Improving statistical MT through morphological analysis
HLT '05 Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing
Automatic evaluation of machine translation quality using n-gram co-occurrence statistics
HLT '02 Proceedings of the second international conference on Human Language Technology Research
Morpho-syntactic information for automatic error analysis of statistical machine translation output
StatMT '06 Proceedings of the Workshop on Statistical Machine Translation
(Meta-) evaluation of machine translation
StatMT '07 Proceedings of the Second Workshop on Statistical Machine Translation
Further meta-evaluation of machine translation
StatMT '08 Proceedings of the Third Workshop on Statistical Machine Translation
On the robustness of syntactic and semantic features for automatic MT evaluation
StatMT '09 Proceedings of the Fourth Workshop on Statistical Machine Translation
The contribution of linguistic features to automatic machine translation evaluation
ACL '09 Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 - Volume 1
Machine Translation Errors: English and Iraqi Arabic
ACM Transactions on Asian Language Information Processing (TALIP)
All in strings: a powerful string-based automatic MT evaluation metric with multiple granularities
COLING '10 Proceedings of the 23rd International Conference on Computational Linguistics: Posters
Linguistic measures for automatic machine translation evaluation
Machine Translation
Automatic translation error analysis
TSD'11 Proceedings of the 14th international conference on Text, speech and dialogue
Corroborating text evaluation results with heterogeneous measures
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Toward determining the comprehensibility of machine translations
PITR '12 Proceedings of the First Workshop on Predicting and Improving Text Readability for target reader populations
CICLing'13 Proceedings of the 14th international conference on Computational Linguistics and Intelligent Text Processing - Volume 2
Statistical machine translation enhancements through linguistic levels: A survey
ACM Computing Surveys (CSUR)
Evaluation and error analysis of machine translation output are important but difficult tasks. In this work, we propose a novel method for obtaining more detail about the actual translation errors in the generated output by decomposing the Word Error Rate (WER) and the Position-independent word Error Rate (PER) over different Part-of-Speech (POS) classes. Furthermore, we investigate two applications of these decompositions to automatic error analysis: estimating inflectional errors and measuring the distribution of missing words over POS classes. The obtained results are shown to correspond to the results of a human error analysis. The results obtained on the European Parliament Plenary Session corpus in Spanish and English give a better overview of the nature of translation errors, as well as indications of where to direct efforts toward improving the translation system.
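The decomposition the abstract describes can be illustrated with a small sketch. The idea: compute the standard Levenshtein (edit-distance) alignment underlying WER, then attribute each substitution or deletion to the POS class of the reference word involved. The function names (`wer_alignment`, `wer_by_pos`) and the pooling of insertions under a separate `INS` bucket are illustrative assumptions, not the paper's exact formulation.

```python
from collections import Counter

def wer_alignment(hyp, ref):
    """Levenshtein alignment between hypothesis and reference token lists.
    Returns a list of (op, ref_index) pairs; ref_index is None for insertions."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion of a reference word
                          d[i][j - 1] + 1,      # insertion of a hypothesis word
                          d[i - 1][j - 1] + cost)  # match or substitution
    # Backtrace to recover the edit operations.
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and \
           d[i][j] == d[i - 1][j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1):
            ops.append(('match' if ref[i - 1] == hyp[j - 1] else 'sub', i - 1))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(('del', i - 1))
            i -= 1
        else:
            ops.append(('ins', None))
            j -= 1
    return list(reversed(ops))

def wer_by_pos(hyp, ref, ref_pos):
    """Decompose WER error counts over the POS class of the reference word.
    Insertions carry no reference POS, so they are pooled under 'INS'
    (an assumption of this sketch)."""
    errors = Counter()
    for op, ref_idx in wer_alignment(hyp, ref):
        if op in ('sub', 'del'):
            errors[ref_pos[ref_idx]] += 1
        elif op == 'ins':
            errors['INS'] += 1
    return errors
```

For example, comparing the hypothesis `["a", "cat"]` against the reference `["the", "cat", "sat"]` with POS tags `["DET", "NOUN", "VERB"]` attributes one substitution error to `DET` and one deletion error to `VERB`. A PER decomposition would differ only in ignoring word order, e.g. by comparing per-POS bags of words instead of the alignment.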