Predicting modality from text queries for medical image retrieval

  • Authors:
  • Pierre Tirilly;Kun Lu;Xiangming Mu

  • Affiliations:
  • University of Wisconsin-Milwaukee, Milwaukee, WI, USA (all authors)

  • Venue:
  • MMAR '11: Proceedings of the 2011 International ACM Workshop on Medical Multimedia Analysis and Retrieval
  • Year:
  • 2011

Abstract

In recent years, increasing attention has been paid to the use of image modality in medical image retrieval. Several methods have been developed to automatically identify the modality of images and integrate this information into image retrieval systems. Results show that using the modality can significantly improve the performance of these systems. However, doing so also requires identifying the modality expressed in the queries. This task is usually performed by elementary pattern-matching techniques that can be applied only to a small proportion of queries. This paper addresses the problem of predicting the modality expressed in queries in a general way. First, a taxonomy of queries and the specificities of the problem are described. Then, a Bayesian classifier is proposed to automatically predict the modality expressed in the queries, together with two models for integrating these predictions into an image retrieval system. Experiments performed on data from the ImageCLEFmed 2009 and 2010 challenges show that our approach can outperform current systems in precision, although the performance can differ significantly from one query to another.
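
The abstract describes predicting the modality expressed in a text query with a Bayesian classifier. The sketch below illustrates one plausible instantiation of that idea (a multinomial naive Bayes text classifier over bag-of-words query features); the example queries, modality labels, and feature choice are hypothetical and do not reproduce the paper's actual features or the ImageCLEFmed data.

```python
# Minimal sketch: predicting the image modality expressed in a text query
# with a naive Bayes classifier. Training queries and modality labels below
# are hypothetical placeholders, not the paper's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training queries annotated with the modality they express.
train_queries = [
    "chest x-ray pneumonia",
    "ct scan liver lesion",
    "mri brain tumor t2",
    "ultrasound fetal heart",
    "histology slide breast carcinoma",
]
train_modalities = ["XR", "CT", "MR", "US", "HX"]

# Bag-of-words features + multinomial naive Bayes, a common form of
# Bayesian text classification.
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(train_queries, train_modalities)

# Predict the modality expressed in an unseen query.
print(classifier.predict(["axial ct of the abdomen"]))  # e.g. ['CT']
```

The predicted modality (or its posterior probability) could then be fed to the retrieval system, e.g., as a filter or as a re-ranking signal, which is the role the paper's two integration models play.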