A study on the use of search engines for answering clinical questions

  • Authors:
  • Andreea Tutos; Diego Mollá

  • Affiliations:
  • Macquarie University, Sydney, NSW (both authors)

  • Venue:
  • HIKM '10 Proceedings of the Fourth Australasian Workshop on Health Informatics and Knowledge Management - Volume 108
  • Year:
  • 2010

Abstract

This paper describes an evaluation of the answerability of a set of clinical questions posed by physicians. The clinical questions belong to two categories of the five-leaf, high-level hierarchical Evidence Taxonomy created by Ely and colleagues: Intervention and No Intervention. The questions are passed to two search engines (PubMed, Google), two question-answering systems (MedQA, Answers.com's BrainBoost), and a dictionary (OneLook) to locate answers to the question corpus. The output of each system is judged by a human and scored using the Mean Reciprocal Rank (MRR). The results show the need for question modification, and the impact of specific types of modification is analysed. The results also show that No Intervention questions are easier to answer than Intervention questions. Further, generic search engines such as Google obtain a higher MRR than specialised systems, and even higher than a version of Google restricted to the specialised literature indexed by PubMed. In addition, an analysis of the location of the answers within the returned documents is provided.
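
The abstract scores each system with the Mean Reciprocal Rank, the standard metric that averages the reciprocal of the rank at which the first correct answer appears. The paper itself gives no code; the following is a minimal sketch of the conventional MRR computation, with an illustrative function name and example ranks that are not taken from the study:

```python
def mean_reciprocal_rank(first_correct_ranks):
    """Compute MRR from a list holding, for each question, the 1-based
    rank of the first returned document judged to contain a correct
    answer, or None if no returned document answers the question
    (such questions contribute a score of 0)."""
    scores = [1.0 / rank if rank is not None else 0.0
              for rank in first_correct_ranks]
    return sum(scores) / len(scores)

# Example: three questions; answers found at ranks 1 and 3, one unanswered.
print(mean_reciprocal_rank([1, 3, None]))  # (1 + 1/3 + 0) / 3 ≈ 0.444
```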