Abstractive summarization has been a long-standing goal in automatic summarization, because systems that can generate abstracts demonstrate a deeper understanding of language and of the meaning of documents than systems that merely extract sentences from them. Genest (2009) showed that summaries from the top automatic summarizers are judged comparable to manual extractive summaries, and that both are judged far less responsive than manual abstracts. As the state of the art approaches the limits of extractive summarization, advancing abstractive summarization becomes even more pressing. However, abstractive summarization has been held back by two questions: what qualifies as important information, and how do we find it? The Guided Summarization task, introduced at the Text Analysis Conference (TAC) 2010, attempts to neutralize both problems by providing topic categories and lists of aspects that a responsive summary should address. This design yields more similar human models, giving automatic summarizers a more focused target to pursue, and also provides detailed diagnostics of summary content, which can help build better meaning-oriented summarization systems.
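To make the category-and-aspect design concrete, the sketch below models one TAC 2010 topic category as a checklist of aspects and checks which aspects a summary still leaves uncovered. The aspect names follow the "Accidents and Natural Disasters" category from the TAC 2010 guidelines; the `uncovered_aspects` helper is purely illustrative and not part of the task definition.

```python
# Illustrative sketch: the Guided Summarization task pairs each topic
# category with a list of aspects that a responsive summary should address.
# Aspect names below are taken from the "Accidents and Natural Disasters"
# category in the TAC 2010 guidelines; other categories carry their own lists.
ASPECTS = {
    "Accidents and Natural Disasters": [
        "WHAT",             # what happened
        "WHEN",             # date and time of the event
        "WHERE",            # location
        "WHY",              # reasons or causes
        "WHO_AFFECTED",     # casualties and affected groups
        "DAMAGES",          # damage caused by the event
        "COUNTERMEASURES",  # rescue efforts, prevention
    ],
}

def uncovered_aspects(category, covered):
    """Return the aspects of `category` that a summary has not yet addressed.

    Hypothetical helper: a content diagnostic of this kind is what the
    aspect lists enable, not an official part of the task.
    """
    covered = set(covered)
    return [a for a in ASPECTS[category] if a not in covered]

missing = uncovered_aspects("Accidents and Natural Disasters",
                            ["WHAT", "WHEN", "WHERE"])
```

A diagnostic like `missing` is the kind of content-level feedback the aspect lists make possible: instead of a single responsiveness score, each summary can be scored on which required aspects it covers.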