A semantic fusion approach between medical images and reports using UMLS

  • Authors:
  • Daniel Racoceanu; Caroline Lacoste; Roxana Teodorescu; Nicolas Vuillemenot

  • Affiliations:
  • IPAL-Image Perception, Access and Language – UMI-CNRS 2955, Institute for Infocomm Research, A*STAR, Singapore (all authors)

  • Venue:
  • AIRS'06: Proceedings of the Third Asia Conference on Information Retrieval Technology
  • Year:
  • 2006

Abstract

One of the main challenges in content-based image retrieval remains bridging the gap between low-level features and semantic information. In this paper, we present our first results on a medical image retrieval approach that semantically indexes both medical images and their reports within a fusion framework based on the Unified Medical Language System (UMLS) metathesaurus. We propose a structured learning framework based on Support Vector Machines that facilitates modular design and extracts medical semantics from images. Within this framework, we developed two complementary visual indexing approaches: a global indexing to access image modality, and a local indexing to access semantic local features. Visual indexes and textual indexes – extracted from medical reports using the MetaMap software – constitute the input of the late fusion module. A weighted vectorial norm fusion algorithm allows the retrieval system to increase its meaningfulness, efficiency and robustness. First results on the CLEF medical database are presented. The important perspectives of this approach in terms of semantic query expansion and data mining are discussed.
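The late fusion step described above can be illustrated with a minimal sketch. The paper does not specify the exact form of its weighted vectorial norm, so the weights, the per-concept score vectors, and the function name below are all illustrative assumptions: each document is scored over a set of UMLS concepts by the visual indexers and by the MetaMap-based textual indexer, the two score vectors are combined with modality weights, and a p-norm over the combined vector yields a single retrieval score.

```python
import numpy as np

def fused_score(visual_sim, textual_sim, w_visual=0.4, w_textual=0.6, p=2):
    """Hypothetical late-fusion score: weighted per-concept combination of
    visual and textual similarities, reduced by a vectorial p-norm.

    The weights and norm order are illustrative, not the paper's values.
    """
    v = np.asarray(visual_sim, dtype=float)   # scores from SVM-based visual indexing
    t = np.asarray(textual_sim, dtype=float)  # scores from MetaMap report indexing
    combined = w_visual * v + w_textual * t   # weighted fusion per UMLS concept
    return float(np.linalg.norm(combined, ord=p))

# Toy example: one query vs. one document over three UMLS concepts
visual = [0.9, 0.1, 0.4]
textual = [0.8, 0.0, 0.5]
score = fused_score(visual, textual)
```

Documents would then be ranked by this fused score, so a document that matches the query in both modalities outranks one supported by a single modality alone.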