A maximum entropy approach to natural language processing
Computational Linguistics
Target-Text Mediated Interactive Machine Translation
Machine Translation
Inducing Features of Random Fields
IEEE Transactions on Pattern Analysis and Machine Intelligence
The mathematics of statistical machine translation: parameter estimation
Computational Linguistics - Special issue on using large corpora: II
Unit completion for a computer-aided translation typing system
ANLC '00 Proceedings of the sixth conference on Applied natural language processing
A DP based search algorithm for statistical machine translation
COLING '98 Proceedings of the 17th international conference on Computational linguistics - Volume 2
A maximum entropy/minimum divergence translation model
ACL '00 Proceedings of the 38th Annual Meeting on Association for Computational Linguistics
Text Prediction with Fuzzy Alignments
AMTA '02 Proceedings of the 5th Conference of the Association for Machine Translation in the Americas on Machine Translation: From Research to Real Users
TransType: Development-Evaluation Cycles to Boost Translator's Productivity
Machine Translation
COLING '02 Proceedings of the 19th international conference on Computational linguistics - Volume 1
User-friendly text prediction for translators
EMNLP '02 Proceedings of the ACL-02 conference on Empirical methods in natural language processing - Volume 10
Confidence estimation for translation prediction
CONLL '03 Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003 - Volume 4
I describe two methods for incorporating information about the relative positions of bilingual word pairs into a Maximum Entropy/Minimum Divergence translation model. The better of the two achieves over 40% lower test corpus perplexity than an equivalent combination of a trigram language model and the classical IBM translation model 2.
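To make the "over 40% lower test corpus perplexity" claim concrete, the sketch below shows how perplexity is conventionally computed from per-token log-probabilities and how a relative reduction is derived. The function name and the toy probabilities are illustrative assumptions, not taken from the paper.

```python
import math

def perplexity(log_probs):
    # Perplexity is the exponential of the average negative
    # log-probability a model assigns to the test-corpus tokens.
    return math.exp(-sum(log_probs) / len(log_probs))

# Toy illustration (hypothetical numbers): a model assigning
# probability 1/4 to every token has perplexity 4; a weaker model
# assigning 1/8 has perplexity 8.
ppl_strong = perplexity([math.log(0.25)] * 100)   # ~4.0
ppl_weak = perplexity([math.log(0.125)] * 100)    # ~8.0
reduction = 1 - ppl_strong / ppl_weak             # ~0.5, i.e. 50% lower
```

A "40% lower" perplexity in this sense means the improved model's perplexity is 0.6 times that of the trigram-plus-IBM-model-2 baseline on the same test corpus.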