This paper presents a two-step approach to compressing spontaneous spoken utterances. In the first step, we use a sequence labeling method to determine whether each word in the utterance can be removed, and generate the n-best compressed sentences. In the second step, we use a discriminative training approach to capture sentence-level global information from these candidates and rerank them. For evaluation, we compare our system output against multiple human references. Our results show that the new features introduced in the first compression step improve performance over previous work on the same data set, and that reranking yields additional gains, especially when training takes multiple references into account.
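The two-step pipeline described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's system: the per-word deletion probabilities are made up (standing in for the sequence-labeling model's output), and the reranker is replaced by a simple hand-set global feature, a penalty for straying from an assumed target compression ratio, rather than a discriminatively trained one.

```python
from itertools import product

# Toy input: a disfluent spoken utterance with hypothetical per-word
# deletion probabilities standing in for the sequence-labeling model's
# output (the actual model and features are not given in the abstract).
WORDS = ["well", "i", "think", "we", "should", "uh", "finish", "early"]
P_DEL = [0.90, 0.20, 0.30, 0.10, 0.20, 0.95, 0.10, 0.15]

def nbest_compressions(words, p_del, n=5):
    """Step 1: score every keep/delete labeling by the product of its
    per-word label probabilities and return the n best compressions."""
    scored = []
    for labels in product([0, 1], repeat=len(words)):  # 1 = delete
        prob = 1.0
        for p, d in zip(p_del, labels):
            prob *= p if d else (1.0 - p)
        kept = [w for w, d in zip(words, labels) if d == 0]
        scored.append((prob, kept))
    scored.sort(key=lambda s: -s[0])
    return scored[:n]

def rerank(candidates, source_len, target_ratio=0.6, weight=0.5):
    """Step 2: rerank candidates with a sentence-level global feature --
    here a toy penalty on deviation from a target compression ratio,
    standing in for the paper's discriminatively trained reranker."""
    def global_score(cand):
        prob, kept = cand
        return prob - weight * abs(len(kept) / source_len - target_ratio)
    return max(candidates, key=global_score)

best_prob, best_kept = rerank(nbest_compressions(WORDS, P_DEL), len(WORDS))
print(" ".join(best_kept))  # drops the filler words "well" and "uh"
```

The enumeration of all label sequences is exponential and only feasible for this toy example; in practice the n-best list would come directly from the labeling model's decoder, and the reranker would combine many learned sentence-level features rather than a single length heuristic.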