A maximum entropy approach to natural language processing
Computational Linguistics
A maximum-entropy-inspired parser
NAACL 2000 Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference
Discriminative training and maximum entropy models for statistical machine translation
ACL '02 Proceedings of the 40th Annual Meeting on Association for Computational Linguistics
A maximum entropy/minimum divergence translation model
ACL '00 Proceedings of the 38th Annual Meeting on Association for Computational Linguistics
Maximum entropy models for named entity recognition
CONLL '03 Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003 - Volume 4
Non-projective dependency parsing using spanning tree algorithms
HLT '05 Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing
The best of two worlds: cooperation of statistical and rule-based taggers for Czech
ACL '07 Proceedings of the Workshop on Balto-Slavonic Natural Language Processing: Information Extraction and Enabling Technologies
TectoMT: highly modular MT system with tectogrammatics used as transfer layer
StatMT '08 Proceedings of the Third Workshop on Statistical Machine Translation
Hidden Markov tree model in dependency-based machine translation
ACLShort '09 Proceedings of the ACL-IJCNLP 2009 Conference Short Papers
Feature-rich translation by quasi-synchronous lattice parsing
EMNLP '09 Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 - Volume 1
WMT '10 Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR
Influence of parser choice on dependency-based MT
WMT '11 Proceedings of the Sixth Workshop on Statistical Machine Translation
Formemes in English-Czech deep syntactic MT
WMT '12 Proceedings of the Seventh Workshop on Statistical Machine Translation
The Maximum Entropy Principle has been used successfully in various NLP tasks. In this paper, we propose a forward translation model consisting of a set of maximum entropy classifiers: a separate classifier is trained for each (sufficiently frequent) source-side lemma. In this way, the estimates of translation probabilities can be sensitive to a large number of features derived from the source sentence, including non-local features and features that make use of the sentence's syntactic structure. When integrated into the English-to-Czech dependency-based translation scenario implemented in the TectoMT framework, the new translation model significantly outperforms the baseline maximum-likelihood (MLE) model in terms of BLEU. Performance is further boosted in a configuration inspired by Hidden Tree Markov Models, which combines the maximum entropy translation model with a target-language dependency tree model.
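The core idea of the abstract — one maximum entropy (multinomial logistic regression) classifier per sufficiently frequent source lemma, predicting a distribution over target lemmas from source-side context features — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the feature names, class names, hyperparameters, and the gradient-ascent trainer are all assumptions made for the example, and the fallback for rare lemmas (an MLE table in the paper's setup) is omitted.

```python
# Sketch of a per-lemma maximum entropy translation model (hypothetical code,
# not from the paper): one softmax classifier per frequent source lemma.
import math
from collections import defaultdict

class MaxEntClassifier:
    """Multinomial logistic regression trained by plain stochastic gradient ascent."""
    def __init__(self, lr=0.5, epochs=200):
        self.lr, self.epochs = lr, epochs
        self.labels = []
        self.weights = {}  # weights[(label, feature)] -> float

    def predict_proba(self, feats):
        # Softmax over per-label linear scores (log-linear model).
        scores = {y: sum(self.weights.get((y, f), 0.0) * v
                         for f, v in feats.items())
                  for y in self.labels}
        m = max(scores.values())
        exps = {y: math.exp(s - m) for y, s in scores.items()}
        z = sum(exps.values())
        return {y: e / z for y, e in exps.items()}

    def train(self, data):
        # data: list of (feature dict, gold target lemma) pairs.
        self.labels = sorted({y for _, y in data})
        for _ in range(self.epochs):
            for feats, gold in data:
                probs = self.predict_proba(feats)
                for y in self.labels:
                    grad = (1.0 if y == gold else 0.0) - probs[y]
                    for f, v in feats.items():
                        key = (y, f)
                        self.weights[key] = self.weights.get(key, 0.0) + self.lr * grad * v

class ForwardTranslationModel:
    """One classifier per source lemma seen at least `min_count` times;
    rarer lemmas would fall back to e.g. an MLE table (not shown here)."""
    def __init__(self, min_count=2):
        self.min_count = min_count
        self.classifiers = {}

    def train(self, examples):
        # examples: (source lemma, feature dict, target lemma) triples.
        by_lemma = defaultdict(list)
        for lemma, feats, target in examples:
            by_lemma[lemma].append((feats, target))
        for lemma, data in by_lemma.items():
            if len(data) >= self.min_count:
                clf = MaxEntClassifier()
                clf.train(data)
                self.classifiers[lemma] = clf

    def p_translate(self, lemma, feats):
        # Returns P(target lemma | source lemma, context), or None if no classifier.
        clf = self.classifiers.get(lemma)
        return clf.predict_proba(feats) if clf else None

# Toy data: the English lemma "bank" disambiguated by invented context features.
examples = [
    ("bank", {"ctx:river": 1.0}, "breh"),
    ("bank", {"ctx:river": 1.0, "pos:NN": 1.0}, "breh"),
    ("bank", {"ctx:money": 1.0}, "banka"),
    ("bank", {"ctx:money": 1.0, "pos:NN": 1.0}, "banka"),
]
model = ForwardTranslationModel()
model.train(examples)
probs = model.p_translate("bank", {"ctx:money": 1.0})
```

Because each classifier conditions on arbitrary features of the source sentence, this setup accommodates the non-local and syntactic features the abstract mentions; in the paper's scenario these distributions would then be combined with a target-language dependency tree model.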