Corroborating text evaluation results with heterogeneous measures
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Error analysis in machine translation is a necessary step for investigating the strengths and weaknesses of MT systems under development and for allowing fair comparisons among them. This work presents an application that shows how a set of heterogeneous automatic metrics can be used to evaluate a test bed of automatic translations. To this end, we have set up an online graphical interface for the Asiya toolkit, a rich repository of evaluation measures operating at different linguistic levels. The current implementation of the interface displays constituency and dependency trees as well as shallow syntactic and semantic annotations, and word alignments. The intelligent visualization of the linguistic structures used by the metrics, together with a set of navigational functionalities, may pave the way towards advanced methods for automatic error analysis.
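As a rough illustration of what combining heterogeneous automatic metrics can look like (this is a minimal sketch, not the Asiya toolkit's actual API; the function names and the uniform averaging scheme are assumptions for exposition), the following computes two simple lexical metrics at different n-gram orders and aggregates them into a single score:

```python
from collections import Counter

def ngram_precision(hyp, ref, n=1):
    """Fraction of hypothesis n-grams also found in the reference (clipped).

    Illustrative lexical metric; real metric suites also operate on
    syntactic and semantic structures, which are omitted here.
    """
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not hyp_ngrams:
        return 0.0
    overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    return overlap / sum(hyp_ngrams.values())

def combine(scores):
    # Hypothetical aggregation: a uniform average over metric scores.
    return sum(scores) / len(scores)

hyp = "the cat sits on the mat".split()
ref = "the cat sat on the mat".split()
scores = [ngram_precision(hyp, ref, n) for n in (1, 2)]
print(round(combine(scores), 3))  # → 0.717
```

In practice a metric repository would replace the uniform average with per-metric normalization and would add measures over parse trees and semantic annotations; the point here is only the pattern of scoring one translation under several measures and inspecting the results side by side.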