Growing interest in interactive systems for answering complex questions led to the development of the complex, interactive QA (ciQA) task, introduced at TREC 2006. This paper describes the rationale and design of the ciQA task and presents the evaluation results. Thirty complex relationship questions based on five question templates were investigated using the AQUAINT collection of newswire text. Interaction forms were the primary vehicle for defining and capturing user-system interactions. In total, six groups participated in the ciQA task and contributed ten different sets of interaction forms. There were two main findings: baseline IR techniques are competitive for complex QA, and interaction, at least as defined and implemented in this evaluation, did not appear to improve performance substantially.