New Methods in Automatic Extracting
Journal of the ACM (JACM)
Introduction to Reinforcement Learning
Artificial Intelligence: A Modern Approach
Answering complex questions with random walk models
SIGIR '06 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval
Older versions of the ROUGEeval summarization evaluation system were easier to fool
Information Processing and Management: an International Journal
User preference choices for complex question answering
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
FastSum: fast and accurate query-based multi-document summarization
HLT-Short '08 Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers
Putting the user in the loop: interactive Maximal Marginal Relevance for query-focused summarization
HLT '10 Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
A new approach to improving multilingual summarization using a genetic algorithm
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Web search solved?: all result rankings the same?
CIKM '10 Proceedings of the 19th ACM international conference on Information and knowledge management
A reinforcement learning framework for answering complex questions
Proceedings of the 16th international conference on Intelligent user interfaces
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
This paper addresses the task of answering complex questions with a multi-document summarization approach set in a reinforcement learning framework. Given a set of complex questions, a list of relevant documents per question, and the corresponding human-generated summaries (i.e., answers to the questions) as training data, the reinforcement learning module iteratively learns a set of feature weights that drive the automatic generation of summaries (i.e., answers) for unseen complex questions. Previous work on this task used a fully automatic reinforcement learning framework that selects document sentences as candidate (i.e., machine-generated) summary sentences by exploiting a relatedness measure against the available human-written summaries. In this paper, we propose an extension to this model that incorporates user interaction into the reinforcement learner to guide the candidate summary sentence selection process. Experimental results demonstrate the effectiveness of the user interaction component within the reinforcement learning framework.
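The weight-learning loop the abstract describes can be sketched roughly as follows. This is a minimal illustration under strong assumptions, not the authors' implementation: the feature functions, the word-overlap relatedness measure, and the `user_feedback` hook are hypothetical stand-ins for the paper's actual features, similarity measure, and interaction mechanism.

```python
def extract_features(sentence, question):
    """Hypothetical features: question-term overlap and a length score."""
    q_terms = set(question.lower().split())
    s_terms = set(sentence.lower().split())
    overlap = len(q_terms & s_terms) / max(len(q_terms), 1)
    length_score = min(len(s_terms) / 20.0, 1.0)
    return [overlap, length_score]

def relatedness(sentence, reference_summary):
    """Crude word-overlap proxy for similarity to the human-written summary."""
    s = set(sentence.lower().split())
    r = set(reference_summary.lower().split())
    return len(s & r) / max(len(s), 1)

def train(weights, question, sentences, reference_summary,
          epochs=50, lr=0.1, user_feedback=None):
    """Iteratively adjust feature weights so that sentences related to the
    reference summary score highest. If user_feedback is supplied, it may
    override the automatic reward for the chosen sentence, standing in for
    the interactive component the paper proposes."""
    for _ in range(epochs):
        # Score every candidate sentence with the current weights.
        scored = [(sum(w * f for w, f in
                       zip(weights, extract_features(s, question))), s)
                  for s in sentences]
        _, chosen = max(scored)
        reward = relatedness(chosen, reference_summary)
        if user_feedback is not None:
            reward = user_feedback(chosen, reward)
        # Reinforce the features of sentences judged close to the reference.
        for i, f in enumerate(extract_features(chosen, question)):
            weights[i] += lr * reward * f
    return weights
```

At test time the learned weights would score sentences from the documents for an unseen question, with the top-scoring sentences forming the candidate answer summary.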