The information retrieval (IR) community has investigated many techniques for retrieving passages from large document collections for question answering (QA). In this paper, we quantitatively compare the impact on QA of passage retrieval using sliding windows versus disjoint windows. We consider two data sets: the TREC 2002--2003 QA data set, and a set of 93 why-questions posed against the INEX Wikipedia collection. We find that, compared to disjoint windows, sliding windows improve performance on TREC-QA in terms of TDRR, and improve performance on why-QA in terms of success@n and MRR.
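The two window types compared above can be sketched as follows. This is a minimal illustration of passage segmentation, not the paper's implementation; the window size and step values are illustrative only.

```python
def disjoint_windows(tokens, size):
    """Split a token list into consecutive, non-overlapping passages."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def sliding_windows(tokens, size, step):
    """Split a token list into overlapping passages whose start positions
    advance by `step` tokens; stops once a window reaches the document end."""
    windows = []
    for i in range(0, len(tokens), step):
        windows.append(tokens[i:i + size])
        if i + size >= len(tokens):
            break
    return windows

doc = "the quick brown fox jumps over the lazy dog".split()
print(disjoint_windows(doc, 4))  # 3 non-overlapping passages
print(sliding_windows(doc, 4, 2))  # 4 half-overlapping passages
```

With a step smaller than the window size, sliding windows overlap, so an answer-bearing span that straddles a disjoint-window boundary is still fully contained in at least one sliding window.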