Automatic text summarization of newswire: lessons learned from the Document Understanding Conference

  • Authors:
  • Ani Nenkova

  • Affiliations:
  • Columbia University, New York, NY

  • Venue:
  • AAAI'05: Proceedings of the 20th National Conference on Artificial Intelligence - Volume 3
  • Year:
  • 2005

Abstract

Since 2001, the Document Understanding Conferences have been the forum for researchers in automatic text summarization to compare methods and results on common test sets. Over the years, several types of summarization tasks have been addressed: single-document summarization, multi-document summarization, summarization focused by a question, and headline generation. This paper is an overview of the results achieved on these different types of summarization tasks. We compare both the broad classes of summarizers (baselines, automatic systems, and humans) and individual pairs of summarizers (both human and automatic). An analysis of variance model was fitted, with summarizer and input set as independent variables and the coverage score as the dependent variable, and simulation-based multiple comparisons were performed. The results document the progress of the field as a whole, rather than focusing on a single system, and thus can serve both as a reference on the work done to date and as a starting point for formulating future tasks. The results also indicate that most progress in the field has been achieved in generic multi-document summarization, and that the most challenging task is producing a focused summary in answer to a question or topic.
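
To make the evaluation design concrete, the sketch below shows how such an analysis of variance could be set up in Python with pandas and statsmodels. The data file and column names (duc_coverage_scores.csv; summarizer, input_set, coverage) are hypothetical stand-ins for the DUC evaluation tables, and Tukey's HSD is used here as a common substitute for the paper's simulation-based multiple comparisons; this is an illustrative approximation, not the authors' actual analysis code.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical evaluation table: one row per (summarizer, input set) pair
# with its coverage score. Column names are illustrative.
df = pd.read_csv("duc_coverage_scores.csv")  # columns: summarizer, input_set, coverage

# Two-way ANOVA: coverage as the dependent variable, summarizer and
# input set as (categorical) independent variables.
model = smf.ols("coverage ~ C(summarizer) + C(input_set)", data=df).fit()
print(anova_lm(model))

# Pairwise comparisons between summarizers, controlling the family-wise
# error rate. The paper reports simulation-based multiple comparisons;
# Tukey's HSD serves the same purpose in this sketch.
tukey = pairwise_tukeyhsd(endog=df["coverage"], groups=df["summarizer"], alpha=0.05)
print(tukey.summary())

Fitting input set as a factor alongside summarizer matters because some inputs are intrinsically harder to summarize; without that term, differences between summarizers would be confounded with differences in input difficulty.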