There are strong expectations for the use of question answering technologies in information access dialogues, such as information gathering and browsing. In this paper, we empirically examine what kinds of abilities question answering systems need in such situations, and propose a challenge for evaluating those abilities objectively and quantitatively. We also show that existing technologies have the potential to address this challenge. From the empirical study, we found that questions whose answers are values and names account for the majority in realistic information-gathering situations, and that these question sequences contain a wide range of reference expressions and are sometimes complicated by subdialogues and focus shifts. The proposed challenge is not only novel as an evaluation of the handling of information access dialogues; it also contributes several valuable ideas, such as the categorization and characterization of information access dialogues, three measures for evaluating various aspects of answering list-type questions, and reference test sets for evaluating context-processing ability in isolation.