We describe a briefing system that learns to predict the contents of reports generated by users who create periodic (weekly) reports as part of their normal activity. The system observes the content-selection choices that users make and builds a predictive model that could, for example, be used to generate an initial draft report. Through a feature of the interface, the system also collects information about potential user-specific features. The system was evaluated under realistic conditions by collecting data in a project-based university course in which student group leaders were tasked with preparing weekly reports for the instructors, drawing on material from individual student reports. This paper addresses the question of whether data derived from the implicit supervision provided by end users is robust enough to support not only model parameter tuning but also a form of feature discovery. The results indicate that it is: system performance improves based on feedback from user activity. We also find that individual learned models (and features) are user-specific, though not completely idiosyncratic. This may suggest that approaches which seek to optimize models globally (say, over a large corpus of data) may not produce results acceptable to all individuals.
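The learning loop described above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's actual method: it assumes a per-user model over content features, trained perceptron-style from implicit supervision, where each item the user keeps in or drops from a drafted report becomes a labeled example. All class, function, and feature names here are invented for illustration.

```python
class UserSelectionModel:
    """Per-user model predicting whether a candidate report item is kept.

    A simple perceptron over sparse features; weights grow lazily as new
    (possibly user-specific) features are observed.
    """

    def __init__(self, learning_rate=0.1):
        self.weights = {}              # feature name -> learned weight
        self.learning_rate = learning_rate

    def score(self, features):
        return sum(self.weights.get(f, 0.0) * v for f, v in features.items())

    def predict(self, features):
        return self.score(features) > 0.0

    def update(self, features, kept):
        """Implicit feedback: the user kept (True) or dropped (False) the item."""
        target = 1.0 if kept else -1.0
        predicted = 1.0 if self.predict(features) else -1.0
        if predicted != target:        # update only on mistakes
            for f, v in features.items():
                self.weights[f] = (
                    self.weights.get(f, 0.0) + self.learning_rate * target * v
                )


def draft_report(model, candidate_items):
    """Propose an initial draft: items the user's model predicts they'd include."""
    return [item for item, feats in candidate_items if model.predict(feats)]
```

Because each user trains their own `UserSelectionModel`, the learned weights can diverge across users, which is one way the abstract's observation, that globally optimized models may not satisfy every individual, could arise in practice.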