Improving SMT quality with morpho-syntactic analysis. COLING '00 Proceedings of the 18th Conference on Computational Linguistics - Volume 2.
BLEU: a method for automatic evaluation of machine translation. ACL '02 Proceedings of the 40th Annual Meeting on Association for Computational Linguistics.
Toward hierarchical models for statistical machine translation of inflected languages. DMMT '01 Proceedings of the Workshop on Data-driven Methods in Machine Translation - Volume 14.
Extending the BLEU MT evaluation method with frequency weightings. ACL '04 Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics.
Improving statistical MT through morphological analysis. HLT '05 Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing.
Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. HLT '02 Proceedings of the Second International Conference on Human Language Technology Research.
Morphological analysis for statistical machine translation. HLT-NAACL-Short '04 Proceedings of HLT-NAACL 2004: Short Papers.
ParaText '05 Proceedings of the ACL Workshop on Building and Using Parallel Texts.
Statistical machine translation. ACM Computing Surveys (CSUR).
On the impact of morphology in English to Spanish statistical MT. Speech Communication.
EACL '09 Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics.
Word error rates: decomposition over POS classes and applications for error analysis. StatMT '07 Proceedings of the Second Workshop on Statistical Machine Translation.
Linguistically annotated reordering: evaluation and analysis. Computational Linguistics.
Language Resources and Evaluation.
Blast: a tool for error analysis of machine translation output. HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Systems Demonstrations.
Automatic translation error analysis. TSD '11 Proceedings of the 14th International Conference on Text, Speech and Dialogue.
Towards automatic error analysis of machine translation output. Computational Linguistics.
A graphical interface for MT evaluation and error analysis. ACL '12 Proceedings of the ACL 2012 System Demonstrations.
CICLing '13 Proceedings of the 14th International Conference on Computational Linguistics and Intelligent Text Processing - Volume 2.
Statistical machine translation enhancements through linguistic levels: a survey. ACM Computing Surveys (CSUR).
Evaluation of machine translation output is an important but difficult task. In recent years, a variety of automatic evaluation measures have been studied; some of them, such as Word Error Rate (WER), Position-independent word Error Rate (PER), and the BLEU and NIST scores, have become widely used tools for comparing different systems as well as for evaluating improvements within a single system. However, these measures give no details about the nature of translation errors, so some analysis of the generated output is needed in order to identify the main problems and to focus research efforts. Human evaluation, on the other hand, is a time-consuming and expensive task. In this paper, we investigate methods for using morpho-syntactic information for automatic evaluation: the standard error measures WER and PER are calculated on distinct word classes and word forms in order to obtain a better picture of the nature of translation errors and of the possibilities for improvement.
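The idea of computing WER and PER per word class can be sketched as follows. This is a minimal illustration, not the paper's exact decomposition: it simply restricts both the reference and the hypothesis to tokens of one POS class (the tag inventory and the helper names are invented for the example) and then applies the standard definitions, with WER as normalized Levenshtein distance and PER as a position-independent bag-of-words error rate.

```python
from collections import Counter

def wer(ref, hyp):
    """Word Error Rate: Levenshtein edit distance over the reference length."""
    d = list(range(len(hyp) + 1))          # DP row for the empty reference prefix
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i               # prev = cell diagonally up-left
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,            # delete reference word
                      d[j - 1] + 1,        # insert hypothesis word
                      prev + (r != h))     # substitute (or match)
            prev, d[j] = d[j], cur
    return d[-1] / len(ref)

def per(ref, hyp):
    """Position-independent Error Rate: ignores word order, compares bags of words."""
    matches = sum((Counter(ref) & Counter(hyp)).values())
    return 1 - (matches - max(0, len(hyp) - len(ref))) / len(ref)

def class_error_rates(ref_tagged, hyp_tagged, pos):
    """WER/PER restricted to one POS class; input is (word, tag) pairs."""
    r = [w for w, t in ref_tagged if t == pos]
    h = [w for w, t in hyp_tagged if t == pos]
    if not r:
        return None                        # class absent from the reference
    return wer(r, h), per(r, h)

# Reordering errors raise WER but not PER:
ref = "mary likes green apples".split()
hyp = "green apples mary likes".split()
print(wer(ref, hyp), per(ref, hyp))       # WER > 0, PER = 0
```

Comparing the two rates per class is what makes the decomposition informative: a class with high WER but low PER is mostly reordered rather than mistranslated, whereas high PER points to lexical or morphological errors within that class.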