Multi-modal query expansion based on local analysis for medical image retrieval

  • Authors:
  • Md. Mahmudur Rahman, Sameer K. Antani, Rodney L. Long, Dina Demner-Fushman, George R. Thoma

  • Affiliations:
  • U.S. National Library of Medicine, National Institutes of Health, Bethesda, MD (all authors)

  • Venue:
  • MCBR-CDS'09: Proceedings of the First MICCAI International Conference on Medical Content-Based Retrieval for Clinical Decision Support
  • Year:
  • 2009

Abstract

A unified medical image retrieval framework that integrates visual and text keywords through a novel multi-modal query expansion (QE) is presented. For the content-based image search, visual keywords are modeled using support vector machine (SVM)-based classification of local color and texture patches from image regions. For the text-based search, keywords are extracted from the associated annotations and indexed. The correlations between keywords in both the visual and text feature spaces are analyzed for QE by considering local feedback information. The QE approach can propagate user-perceived semantics from one modality to the other and improve retrieval effectiveness when the modalities are combined in a multi-modal search. An evaluation of the method on the ImageCLEFmed'08 dataset and topics shows an improvement in mean average precision (MAP) of 0.15 over comparable searches without QE or using only a single modality.
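The abstract describes QE driven by local (pseudo-relevance) feedback: keywords that are strongly correlated with the query keywords inside the top-ranked results are added to the query. The following Python sketch illustrates that general idea only; it is not the authors' implementation, and the function names, the toy vocabulary (which could stand for either text terms or visual keywords), and the initial ranking are illustrative assumptions.

```python
import numpy as np

def expand_query(query_terms, doc_term_matrix, vocab, ranking,
                 top_k=5, n_expand=3):
    """Add the n_expand terms most correlated with the query terms
    within the top_k initially retrieved documents (local feedback)."""
    feedback = doc_term_matrix[ranking[:top_k]]        # local feedback set
    term_index = {t: i for i, t in enumerate(vocab)}
    query_cols = [term_index[t] for t in query_terms if t in term_index]

    # Profile of how strongly each feedback document matches the query terms,
    # then score every vocabulary term by co-occurrence with that profile.
    query_profile = feedback[:, query_cols].mean(axis=1)
    scores = feedback.T @ query_profile
    for c in query_cols:
        scores[c] = -np.inf                            # don't re-add original terms

    expansion = [vocab[i] for i in np.argsort(scores)[::-1][:n_expand]]
    return list(query_terms) + expansion

# Toy example: 6 documents x 5 keywords (hypothetical data).
vocab = ["chest", "xray", "nodule", "ct", "fracture"]
docs = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1],
    [1, 0, 1, 0, 0],
    [0, 0, 0, 1, 1],
], dtype=float)
initial_ranking = [0, 1, 2, 4, 3, 5]                   # assumed first-pass retrieval order
print(expand_query(["chest", "xray"], docs, vocab, initial_ranking))
```

In a multi-modal setting, the same feedback documents can be used to score terms from the other modality's vocabulary, which is one way semantics can propagate from one modality to the other as the abstract describes.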