This abstract reports on our submission to the shallow track of the Generation Challenges 2011 Surface Realization Shared Task. The system is intended to be minimal in the sense that it uses (almost) no lexical, syntactic, or semantic information beyond what is found in the training corpus itself. Its architecture is motivated by work on FERGUS (Bangalore and Rambow, 2000). The system draws on three information sources, each acquired from the training corpus: a localized tree model capturing information from the dependency tree; a trigram language model capturing word-order information for words in the same subtree; and a morphological dictionary. In the sections below we briefly present each of these models.
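To make the second information source concrete, the following is a minimal sketch of how a trigram language model could score and choose a surface order for the words in one dependency subtree. The class name, the add-one smoothing, and the brute-force permutation search are illustrative assumptions, not the authors' actual implementation.

```python
from collections import defaultdict
from itertools import permutations
import math

class TrigramOrderModel:
    """Illustrative sketch (not the shared-task system itself): a trigram
    model over sibling sequences, trained on observed surface orders of
    dependency subtrees, used to pick a linearization."""

    def __init__(self):
        self.tri = defaultdict(int)  # trigram counts
        self.bi = defaultdict(int)   # bigram (context) counts

    def train(self, ordered_subtrees):
        # Each training item is a list of tokens from one subtree,
        # in the surface order observed in the corpus.
        for tokens in ordered_subtrees:
            seq = ["<s>", "<s>"] + tokens + ["</s>"]
            for i in range(2, len(seq)):
                self.tri[(seq[i - 2], seq[i - 1], seq[i])] += 1
                self.bi[(seq[i - 2], seq[i - 1])] += 1

    def logprob(self, tokens):
        # Score one candidate ordering with add-one smoothed trigrams.
        seq = ["<s>", "<s>"] + tokens + ["</s>"]
        lp = 0.0
        for i in range(2, len(seq)):
            num = self.tri[(seq[i - 2], seq[i - 1], seq[i])] + 1
            den = self.bi[(seq[i - 2], seq[i - 1])] + len(self.bi) + 1
            lp += math.log(num / den)
        return lp

    def best_order(self, tokens):
        # Brute-force search over permutations; feasible only because
        # individual subtrees are small.
        return max(permutations(tokens),
                   key=lambda p: self.logprob(list(p)))
```

For example, after training on observed subtree orders such as `["the", "dog", "barks"]`, `best_order(["dog", "the", "barks"])` recovers the attested ordering. A real system would restrict the search and back off for unseen contexts; the point here is only that word order within a subtree can be reduced to language-model scoring of candidate sequences.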