We investigated to what extent users can be satisfied by a web search engine when answering causal questions. We used an assessment environment that simulated a web search interface. For 1,401 why-queries from a search engine log, we pre-retrieved the first 10 results using Bing; 311 of these queries were assessed by human judges. We found that even without clicking on a result, 25.2% of the why-questions are answered on the first result page alone. If we count an intended click on a result as a vote for relevance, then 74.4% of the why-questions get at least one relevant answer in the top 10. According to the human assessors, 10% of the why-queries submitted to web search engines are not answerable at all.
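The reported percentages can be understood as two success rates over the assessed queries: the fraction answered on the result page itself, and the fraction with at least one relevant result in the top 10. The sketch below illustrates that computation; the data structure and field names are hypothetical, and the sample records are illustrative, not the study's actual assessment data.

```python
# Hypothetical sketch: computing the two answer rates from per-query
# relevance assessments. Field names are assumptions for illustration.

def answer_rates(assessments):
    """assessments: list of dicts with keys
       'answered_on_serp' (bool): answer already visible on the result page
       'relevant_clicks' (int):   results in the top 10 judged relevant
    Returns (fraction answered on the page, fraction with >= 1 relevant result)."""
    n = len(assessments)
    on_serp = sum(a["answered_on_serp"] for a in assessments) / n
    in_top10 = sum(
        a["answered_on_serp"] or a["relevant_clicks"] > 0 for a in assessments
    ) / n
    return on_serp, in_top10

# Four illustrative judged queries (not real data):
sample = [
    {"answered_on_serp": True,  "relevant_clicks": 2},
    {"answered_on_serp": False, "relevant_clicks": 1},
    {"answered_on_serp": False, "relevant_clicks": 0},
    {"answered_on_serp": False, "relevant_clicks": 3},
]
on_serp, in_top10 = answer_rates(sample)
print(f"answered on result page: {on_serp:.1%}")       # 25.0%
print(f"at least one relevant result: {in_top10:.1%}") # 75.0%
```

In the study itself these rates were 25.2% and 74.4% over the 311 assessed queries; the sketch only makes explicit how a click is counted as a vote for relevance.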