The syntactic process
The mathematics of statistical machine translation: parameter estimation
Computational Linguistics - Special issue on using large corpora: II
Stochastic inversion transduction grammars and bilingual parsing of parallel corpora
Computational Linguistics
New figures of merit for best-first probabilistic chart parsing
Computational Linguistics
BLEU: a method for automatic evaluation of machine translation
ACL '02 Proceedings of the 40th Annual Meeting on Association for Computational Linguistics
Generative models for statistical parsing with Combinatory Categorial Grammar
ACL '02 Proceedings of the 40th Annual Meeting on Association for Computational Linguistics
Statistical phrase-based translation
NAACL '03 Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1
Sentence Fusion for Multidocument News Summarization
Computational Linguistics
Online Passive-Aggressive Algorithms
The Journal of Machine Learning Research
Hierarchical Phrase-Based Translation
Computational Linguistics
CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank
Computational Linguistics
Wide-coverage efficient statistical parsing with CCG and log-linear models
Computational Linguistics
Moses: open source toolkit for statistical machine translation
ACL '07 Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions
LTAG dependency parsing with bidirectional incremental construction
EMNLP '08 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Tree linearization in English: improving language model based approaches
NAACL-Short '09 Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers
An efficient algorithm for easy-first non-directional dependency parsing
HLT '10 Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Fluency constraints for minimum Bayes-risk decoding of statistical machine translation lattices
COLING '10 Proceedings of the 23rd International Conference on Computational Linguistics
Broad coverage multilingual deep sentence generation with a stochastic multi-level realizer
COLING '10 Proceedings of the 23rd International Conference on Computational Linguistics
Exact decoding of syntactic translation models through Lagrangian relaxation
HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1
Syntax-based grammaticality improvement using CCG and guided search
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
DCU at Generation Challenges 2011 Surface Realisation Track
ENLG '11 Proceedings of the 13th European Workshop on Natural Language Generation
Partial-tree linearization: generalized word ordering for text synthesis
IJCAI '13 Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence
A fundamental problem in text generation is word ordering. Word ordering is computationally difficult, although it can be constrained to some extent for particular applications, for example by using synchronous grammars for statistical machine translation. There have been recent attempts at the unconstrained problem of generating a sentence from a multi-set of input words (Wan et al., 2009; Zhang and Clark, 2011). Using CCG and learned guided search, Zhang and Clark reported the highest scores on this task. One limitation of their system is the absence of an N-gram language model, which text generation systems commonly use to improve fluency. We take the Zhang and Clark system as our baseline and incorporate an N-gram model by applying online large-margin training. Our system significantly improves on the baseline, by 3.7 BLEU points.
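The core task the abstract describes can be illustrated with a minimal sketch: given a multi-set of input words, choose the ordering that scores best under an N-gram language model. The bigram scores below are purely illustrative assumptions (not from the paper), and the names `BIGRAM_SCORES`, `lm_score`, and `best_order` are hypothetical; the brute-force permutation search also stands in for the guided best-first search the actual system uses, since exhaustive enumeration is exponential in sentence length.

```python
import itertools

# Toy bigram "language model": illustrative log-probability-like scores.
# In the real setting these would come from an N-gram model trained on a
# large corpus; the numbers here are assumptions for the sketch only.
BIGRAM_SCORES = {
    ("<s>", "the"): -0.5,
    ("the", "dog"): -1.0,
    ("dog", "barks"): -1.2,
    ("barks", "</s>"): -0.8,
}
DEFAULT_SCORE = -5.0  # crude back-off for unseen bigrams

def lm_score(words):
    """Sum bigram scores over the sequence, with boundary symbols."""
    seq = ["<s>"] + list(words) + ["</s>"]
    return sum(BIGRAM_SCORES.get(pair, DEFAULT_SCORE)
               for pair in zip(seq, seq[1:]))

def best_order(multiset):
    """Pick the word order with the highest LM score by brute force.
    (A real word-ordering system replaces this enumeration with
    guided search, since the permutation space grows factorially.)"""
    return max(itertools.permutations(multiset), key=lm_score)

print(best_order(["dog", "barks", "the"]))  # ('the', 'dog', 'barks')
```

The sketch makes the fluency point concrete: only the order "the dog barks" hits known bigrams throughout, so the language model prefers it over the other five permutations.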