BLEU: a method for automatic evaluation of machine translation
ACL '02 Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics
Minimum error rate training in statistical machine translation
ACL '03 Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics - Volume 1
Phrasal cohesion and statistical machine translation
EMNLP '02 Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10
Clause restructuring for statistical machine translation
ACL '05 Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics
Improving a statistical MT system with automatically learned rewrite patterns
COLING '04 Proceedings of the 20th International Conference on Computational Linguistics
Measuring Word Alignment Quality for Statistical Machine Translation
Computational Linguistics
Algorithms for deterministic incremental dependency parsing
Computational Linguistics
Predicting success in machine translation
EMNLP '08 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Using a dependency parser to improve SMT for subject-object-verb languages
NAACL '09 Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics
A quantitative analysis of reordering phenomena
StatMT '09 Proceedings of the Fourth Workshop on Statistical Machine Translation
Discriminative reordering models for statistical machine translation
StatMT '06 Proceedings of the Workshop on Statistical Machine Translation
The Meteor metric for automatic evaluation of machine translation
Machine Translation
Metrics for MT evaluation: evaluating reordering
Machine Translation
LRscore for evaluating lexical and reordering quality in MT
WMT '10 Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR
Automatic evaluation of translation quality for distant language pairs
EMNLP '10 Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing
Training a parser for machine translation reordering
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Training dependency parsers by jointly optimizing multiple objectives
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Inducing sentence structure from parallel corpora for reordering
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
PLUTO: automated solutions for patent translation
EACL 2012 Proceedings of the Joint Workshop on Exploiting Synergies between Information Retrieval and Machine Translation (ESIRMT) and Hybrid Approaches to Machine Translation (HyTra)
Forced derivation tree based model training to statistical machine translation
EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
Inducing a discriminative parser to optimize machine translation reordering
EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
A model based transformation paradigm for cross-language collaborations
Advanced Engineering Informatics
Reordering is a major challenge for machine translation between distant languages. Recent work has shown that evaluation metrics that explicitly account for target-language word order correlate better with human judgments of translation quality. Here we present a simple framework for evaluating word order independently of lexical choice by comparing the system's reordering of a source sentence to reference reordering data generated from manually word-aligned translations. When used to evaluate a system that performs reordering as a preprocessing step, our framework allows the parser and reordering rules to be evaluated very quickly, without time-consuming end-to-end machine translation experiments. A novelty of our approach is that the translations used to generate the reordering reference data are produced in an alignment-oriented fashion. We show that the way the alignments are generated can significantly affect the robustness of the evaluation. We also outline some ways in which this framework has allowed our group to analyze reordering errors for English-to-Japanese machine translation.
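To make the comparison of a system reordering against a reference reordering concrete, the sketch below scores two reorderings, each represented as a permutation of source token indices, by the fraction of token pairs whose relative order agrees (a Kendall's-tau-style similarity). This is a minimal illustration under that representation, not the paper's actual metric; the function name and permutation encoding are assumptions for the example.

```python
from itertools import combinations

def kendall_tau_similarity(system, reference):
    """Fraction of source-token pairs whose relative order in the
    system reordering agrees with the reference reordering.

    Both arguments are permutations of source token indices,
    e.g. [2, 0, 1] means the token originally at index 2 comes first.
    (Illustrative encoding; not necessarily the paper's.)
    """
    assert sorted(system) == sorted(reference), "must reorder the same tokens"
    # Position (rank) of each source token in each reordering.
    sys_rank = {tok: i for i, tok in enumerate(system)}
    ref_rank = {tok: i for i, tok in enumerate(reference)}
    pairs = list(combinations(system, 2))
    if not pairs:
        return 1.0  # zero- or one-token sentence: trivially in order
    concordant = sum(
        1 for a, b in pairs
        if (sys_rank[a] < sys_rank[b]) == (ref_rank[a] < ref_rank[b])
    )
    return concordant / len(pairs)

# Identical reorderings score 1.0; a fully reversed one scores 0.0.
print(kendall_tau_similarity([2, 0, 1], [2, 0, 1]))  # 1.0
print(kendall_tau_similarity([0, 1, 2], [2, 1, 0]))  # 0.0
```

Because the score is computed directly on permutations, it needs no decoding or reference translations at evaluation time, which is what makes this style of evaluation fast compared with end-to-end MT experiments.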