The Document Understanding Conference (DUC) 2005 evaluation featured a single user-oriented, question-focused summarization task: synthesizing a well-organized, fluent answer to a complex question from a set of 25--50 documents. The evaluation shows that even the best summarization systems have difficulty extracting sentences that are relevant to a complex question (as opposed to representative sentences that might suit a generic summary). The relatively generous allowance of 250 words per answer also reveals how difficult it is for current summarization systems to produce fluent text from multiple documents.