Bi-modal conceptual indexing for medical image retrieval

  • Authors: Joo-Hwee Lim, Jean-Pierre Chevallet, Diem Thi Hoang Le, Hanlin Goh

  • Affiliation (all authors): French-Singapore IPAL Joint Lab (UMI CNRS, NUS, UJF), Institute for Infocomm Research, Singapore

  • Venue: MMM '08: Proceedings of the 14th International Conference on Advances in Multimedia Modeling
  • Year: 2008

Abstract

To facilitate the automatic indexing and retrieval of large medical image databases, images and their associated texts are indexed using concepts from the Unified Medical Language System (UMLS) metathesaurus. We propose a structured learning framework for learning medical semantics from images, and present two complementary visual indexing approaches, one global and one local. Two fusion approaches are then used to improve textual retrieval with the UMLS-based image indexing: a simple post-query fusion, and a visual modality filter that removes visually aberrant images according to the modality concepts found in the query. On the ImageCLEFmed database, we demonstrate that our framework outperforms the automatic runs evaluated on the same Medical Image Retrieval task in 2005.
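
The sketch below is a minimal illustration of the two fusion strategies named in the abstract: a weighted post-query (late) fusion of per-image textual and visual scores, followed by a modality filter that discards images whose visually predicted modality conflicts with the modality concepts extracted from the query. The function names, the linear weighting, and the score dictionaries are illustrative assumptions, not the authors' actual implementation.

```python
def late_fusion(text_scores, visual_scores, alpha=0.7):
    """Post-query fusion: linearly combine per-image textual and visual
    retrieval scores (alpha is an assumed mixing weight) and return a
    ranked list of (image_id, fused_score) pairs."""
    fused = {}
    for image_id in set(text_scores) | set(visual_scores):
        t = text_scores.get(image_id, 0.0)
        v = visual_scores.get(image_id, 0.0)
        fused[image_id] = alpha * t + (1.0 - alpha) * v
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)


def modality_filter(ranked, predicted_modality, query_modalities):
    """Visual modality filtering: drop images whose predicted modality
    (e.g. an UMLS modality concept such as 'X-ray') is not among the
    modality concepts extracted from the query text."""
    if not query_modalities:  # no modality constraint in the query
        return ranked
    return [(image_id, score) for image_id, score in ranked
            if predicted_modality.get(image_id) in query_modalities]


# Toy usage with made-up scores and modality labels.
if __name__ == "__main__":
    text = {"img1": 0.9, "img2": 0.4, "img3": 0.7}
    visual = {"img1": 0.2, "img2": 0.8, "img3": 0.6}
    modality = {"img1": "X-ray", "img2": "CT", "img3": "X-ray"}
    ranked = late_fusion(text, visual)
    print(modality_filter(ranked, modality, {"X-ray"}))
```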