Biomedical image retrieval using multimodal context and concept feature spaces

  • Authors:
  • Md. Mahmudur Rahman; Sameer K. Antani; Dina Demner-Fushman; George R. Thoma

  • Affiliations:
  • U.S. National Library of Medicine, National Institutes of Health, Bethesda, MD (all authors)

  • Venue:
  • MCBR-CDS'11 Proceedings of the Second MICCAI international conference on Medical Content-Based Retrieval for Clinical Decision Support
  • Year:
  • 2011


Abstract

This paper presents a unified medical image retrieval method that integrates visual features and text keywords through multimodal classification and filtering. For content-based image search, concepts derived from visual features are modeled by support vector machine (SVM)-based classification of local patches sampled from image regions. Text keywords from the associated metadata provide the context and are indexed using the vector space model of information retrieval. The concept and context vectors are combined and used to train a global SVM classifier for image modality (e.g., CT, MR, x-ray) detection. The probabilistic outputs of this modality categorization are then used to filter images so that the search is performed only on a candidate subset. An evaluation on the ImageCLEFmed 2010 dataset of 77,000 images with XML annotations and topics yields a mean average precision (MAP) score of 0.1125, demonstrating the effectiveness and efficiency of the proposed multimodal framework compared with using a single modality alone or no classification information.
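The filtering step described in the abstract — restricting the similarity search to images whose predicted modality probability passes a threshold — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the fused feature vectors (concept concatenated with context) and the per-modality probabilities (which in the paper come from the global SVM's probabilistic outputs) are invented here for demonstration, and the collection, vector values, and threshold are all hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

# Toy collection: each image carries a fused (concept + context) vector and
# per-modality probabilities, standing in for the SVM's probabilistic outputs.
# All values below are fabricated for illustration.
collection = {
    "ct_chest": {"vec": [0.9, 0.1, 0.8, 0.0],
                 "p_modality": {"CT": 0.85, "MR": 0.10, "x-ray": 0.05}},
    "ct_head":  {"vec": [0.7, 0.2, 0.6, 0.1],
                 "p_modality": {"CT": 0.70, "MR": 0.20, "x-ray": 0.10}},
    # Same visual/textual vector as ct_chest, but predicted to be MR,
    # so modality filtering should exclude it from a CT query.
    "mr_brain": {"vec": [0.9, 0.1, 0.8, 0.0],
                 "p_modality": {"CT": 0.10, "MR": 0.85, "x-ray": 0.05}},
}

def filtered_search(query_vec, query_modality, collection, threshold=0.5):
    """Keep only images whose probability for the query's modality meets the
    threshold, then rank the candidate subset by cosine similarity."""
    candidates = {
        name: item for name, item in collection.items()
        if item["p_modality"].get(query_modality, 0.0) >= threshold
    }
    return sorted(candidates,
                  key=lambda name: cosine(query_vec, candidates[name]["vec"]),
                  reverse=True)

ranked = filtered_search([0.9, 0.1, 0.8, 0.0], "CT", collection)
print(ranked)  # ct_chest ranks first; mr_brain is filtered out despite an identical vector
```

The point of the filter is visible in the toy data: `mr_brain` has a vector identical to the query, so a purely similarity-based search would rank it at the top, but the modality classifier's low CT probability removes it from the candidate subset before ranking.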