Cross-modal interaction and integration with relevance feedback for medical image retrieval

  • Authors:
  • Md. Mahmudur Rahman; Varun Sood; Bipin C. Desai; Prabir Bhattacharya

  • Affiliations:
  • Dept. of Computer Science & Software Engineering, Concordia University, Canada; Dept. of Computer Science & Software Engineering, Concordia University, Canada; Dept. of Computer Science & Software Engineering, Concordia University, Canada; Institute for Information Systems Engineering, Concordia University, Canada

  • Venue:
  • MMM'07: Proceedings of the 13th International Conference on Multimedia Modeling - Volume Part I
  • Year:
  • 2007

Abstract

This paper presents a cross-modal approach to image retrieval from a medical image collection that integrates visual information derived from low-level image content with case-related textual information from annotated XML files. The strengths of both modalities are exploited by involving the user in the retrieval loop. For content-based search, low-level visual features are extracted as vectors at different image representations. For text-based search, keywords are extracted from the annotation files and indexed with the vector space model of information retrieval. Based on relevance feedback, textual and visual query refinements are performed, and the user's perceived semantics are propagated from one modality to the other. Finally, the most similar images are obtained by a linear combination of similarity scores and re-ranking within a pre-filtered image set. Experiments are performed on a collection of diverse medical images, each annotated with case-based information by experts. The results demonstrate the flexibility and effectiveness of the proposed approach compared to using a single modality or no feedback information.
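As a rough illustration of the feedback and fusion steps summarized in the abstract, the sketch below combines a Rocchio-style query refinement per modality (a common stand-in for the paper's unspecified refinement scheme) with a linear combination of cosine similarities to re-rank a pre-filtered candidate set. All function names, weights, and data here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_similarity(q, d):
    """Cosine similarity between a query vector and a document/image feature vector."""
    denom = np.linalg.norm(q) * np.linalg.norm(d)
    return float(q @ d / denom) if denom else 0.0

def rocchio_refine(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio-style query refinement from user relevance feedback (illustrative)."""
    refined = alpha * query
    if len(relevant):
        refined += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        refined -= gamma * np.mean(nonrelevant, axis=0)
    return refined

def fused_score(text_q, text_doc, vis_q, vis_feat, w_text=0.5, w_vis=0.5):
    """Linear combination of textual and visual similarity scores."""
    return (w_text * cosine_similarity(text_q, text_doc)
            + w_vis * cosine_similarity(vis_q, vis_feat))

# Toy example: refine both query representations from feedback, then
# re-rank a pre-filtered candidate set by the fused score.
rng = np.random.default_rng(0)
text_q, vis_q = rng.random(50), rng.random(64)
candidates = [(rng.random(50), rng.random(64)) for _ in range(10)]  # (text, visual) per image

# Suppose the user marked the first two candidates as relevant.
relevant_text = np.array([candidates[0][0], candidates[1][0]])
relevant_vis = np.array([candidates[0][1], candidates[1][1]])
text_q = rocchio_refine(text_q, relevant_text, np.empty((0, 50)))
vis_q = rocchio_refine(vis_q, relevant_vis, np.empty((0, 64)))

ranked = sorted(range(len(candidates)),
                key=lambda i: fused_score(text_q, candidates[i][0], vis_q, candidates[i][1]),
                reverse=True)
print("Re-ranked candidate indices:", ranked)
```

In this sketch, propagating the user's feedback to both query vectors before fusion mirrors the abstract's idea of carrying perceived semantics from one modality to the other; the fusion weights would in practice be tuned or adapted rather than fixed at 0.5.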