In the context of the Document Understanding Conferences, the task of Query-Focused Multi-Document Summarization is intended to improve agreement in content among human-generated model summaries. Query focus also helps automatic summarizers direct the summary at specific topics, which may result in better agreement with these model summaries. However, while query focus correlates with performance, we show that high-performing automatic systems produce summaries with disproportionately higher query term density than human summarizers do. Experimental evidence suggests that automatic systems rely heavily on query term occurrence and repetition to achieve good performance.
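As a rough illustration of the kind of measure involved (a minimal sketch, not necessarily the paper's exact formulation), query term density can be taken as the fraction of summary tokens that match a query term; comparing this value for automatic versus human summaries is what reveals the disproportion described above. The tokenization and stopword handling here are assumptions for the sake of the example.

```python
def query_term_density(summary: str, query: str, stopwords=frozenset()) -> float:
    """Fraction of summary tokens that are (non-stopword) query terms.

    Illustrative sketch: lowercase whitespace tokenization with light
    punctuation stripping; a real system would use a proper tokenizer
    and stemming.
    """
    query_terms = {t.lower().strip(".,;:!?") for t in query.split()} - stopwords
    tokens = [t.lower().strip(".,;:!?") for t in summary.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in query_terms)
    return hits / len(tokens)


# Example: an automatic summary that repeats the query term scores a
# higher density than a human paraphrase of the same content.
auto_density = query_term_density(
    "The economy improved as the economy added jobs.", "economy jobs"
)
human_density = query_term_density(
    "Employment rose as business conditions improved.", "economy jobs"
)
```

Here `auto_density` exceeds `human_density`, mirroring the finding that automatic summarizers lean on query term occurrence and repetition.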