This paper describes an evaluation of the answerability of a set of clinical questions posed by physicians. The questions belong to two categories of the five-leaf high-level hierarchical Evidence Taxonomy created by Ely and his colleagues: Intervention and No Intervention. The questions are submitted to two search engines (PubMed, Google), two question-answering systems (MedQA, Answers.com's BrainBoost), and a dictionary (OneLook) to locate answers to the question corpus. The output of each system is judged by a human assessor and scored with the Mean Reciprocal Rank (MRR). The results show that the questions often need to be modified before they can be answered, and they analyse the impact of specific types of modification. They also show that No Intervention questions are easier to answer than Intervention questions. Furthermore, a generic search engine such as Google obtains a higher MRR than the specialised systems, and even a higher MRR than a Google search restricted to the specialised literature (PubMed). Finally, an analysis of where the answers appear within the returned documents is provided.
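As an illustration of the scoring metric mentioned above, the following is a minimal sketch of how MRR could be computed from human judgements. The function name and data layout are assumptions made for illustration only; this is not the authors' code.

    # Minimal sketch (assumed, not from the paper): Mean Reciprocal Rank (MRR).
    # `judged_ranks` holds, for each question, the 1-based rank of the first
    # correct answer returned by a system, or None if no correct answer was found.
    from typing import Optional, Sequence

    def mean_reciprocal_rank(judged_ranks: Sequence[Optional[int]]) -> float:
        """Average of 1/rank of the first correct answer; unanswered questions count as 0."""
        if not judged_ranks:
            return 0.0
        return sum(1.0 / r if r is not None else 0.0 for r in judged_ranks) / len(judged_ranks)

    # Example: first correct answers at ranks 1 and 3, one question unanswered.
    print(mean_reciprocal_rank([1, 3, None]))  # (1 + 1/3 + 0) / 3 = 0.444...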