An integrated approach for medical image retrieval through combining textual and visual features

  • Authors:
  • Zheng Ye; Xiangji Huang; Qinmin Hu; Hongfei Lin

  • Affiliations:
  • Department of Computer Science and Engineering, Dalian University of Technology, Dalian, Liaoning, China and Information Retrieval and Knowledge Management Lab, York University, Toronto, Canada; Information Retrieval and Knowledge Management Lab, York University, Toronto, Canada; Information Retrieval and Knowledge Management Lab, York University, Toronto, Canada; Department of Computer Science and Engineering, Dalian University of Technology, Dalian, Liaoning, China

  • Venue:
  • CLEF'09 Proceedings of the 10th international conference on Cross-language evaluation forum: multimedia experiments
  • Year:
  • 2009

Abstract

In this paper, we present an empirical study of monolingual medical image retrieval. In particular, we present a series of experiments in the ImageCLEFmed 2009 task. There are three main goals. First, we evaluate traditional, well-known weighting models from the text retrieval domain, such as BM25, TFIDF and the Language Model (LM), for context-based image retrieval. Second, we evaluate statistical feedback models and ontology-based feedback models. Third, we investigate how content-based image retrieval can be integrated with these two basic techniques from the traditional text retrieval domain. The experimental results show that: 1) traditional weighting models work well for the context-based medical image retrieval task, especially when their parameters are tuned properly; 2) statistical feedback models can further improve retrieval performance when a small number of documents is used for feedback; however, medical image retrieval does not benefit from the ontology-based query expansion method used in this paper; 3) retrieval performance can be slightly boosted via an integrated retrieval approach.
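The abstract names the standard Okapi BM25 weighting model and a fusion of text and visual scores. As a rough illustration only (not the authors' implementation; the toy corpus, the min-max normalization, and the mixing weight `alpha` are all assumptions), BM25 scoring over tokenized image captions and a simple linear combination with content-based scores might look like:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each document (a list of tokens) against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N          # average document length
    df = Counter()                                  # document frequency per term
    for d in docs:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs:
        tf = Counter(d)                             # term frequency in this document
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1.0)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def fuse(text_scores, image_scores, alpha=0.8):
    """Hypothetical integration step: min-max normalize each score list,
    then mix them linearly (alpha weights the textual side)."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    return [alpha * t + (1 - alpha) * v
            for t, v in zip(norm(text_scores), norm(image_scores))]

# Toy caption corpus (invented for illustration).
docs = [["chest", "xray", "pneumonia"],
        ["brain", "mri"],
        ["chest", "ct", "scan"]]
scores = bm25_scores(["chest", "xray"], docs)
fused = fuse(scores, [0.1, 0.9, 0.4])
```

Here the first document matches both query terms and receives the highest BM25 score; tuning `k1`, `b`, and `alpha` corresponds to the parameter tuning the abstract reports as important.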