Automatic medical image annotation and retrieval

  • Authors:
  • Jian Yao;Zhongfei (Mark) Zhang;Sameer Antani;Rodney Long;George Thoma

  • Affiliations:
  • Department of Computer Science, State University of New York at Binghamton, Binghamton, NY 13902, USA (J. Yao; Z. Zhang); National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA (S. Antani; R. Long; G. Thoma)

  • Venue:
  • Neurocomputing
  • Year:
  • 2008

Abstract

The demand for automatically annotating and retrieving medical images is growing faster than ever. In this paper, we present a novel medical image retrieval method for a special retrieval problem in which every image in the retrieval database can be annotated with one of a set of pre-defined labels, and a user may query the database with an image that is close to, but not exactly, what he/she expects. The retrieval consists of two steps: deducible retrieval and traditional retrieval. Deducible retrieval is a special form of semantic retrieval that identifies the label the user expects, while traditional retrieval returns the images in the database that belong to this label and are most similar in appearance to the query image. Deducible retrieval is achieved using SEMI-supervised Semantic Error-Correcting output Codes (SEMI-SECC); an active learning method is also exploited to further reduce the number of ground-truthed training images required. Relevance feedback (RF) is used in both retrieval steps: in deducible retrieval, RF acts as a short-term memory feedback that helps identify the label the user expects; in traditional retrieval, RF acts as a long-term memory feedback that helps ground-truth the unlabelled training images in the database. The experimental results on the IMAGECLEF 2005 annotation data set clearly show the strength and the promise of the presented methods.
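The error-correcting output codes idea underlying SEMI-SECC can be illustrated with a minimal decoding sketch. The codebook, class labels, and simulated binary-classifier outputs below are hypothetical examples, not taken from the paper; the point is only that well-separated codewords let nearest-codeword decoding correct a few binary-classifier errors.

```python
# Minimal sketch of error-correcting output codes (ECOC) decoding, the
# supervised backbone that a semi-supervised variant such as SEMI-SECC
# would build on. Codewords and labels here are illustrative only.

CODEBOOK = {  # hypothetical 4-class codebook: one binary codeword per label
    "chest":  (0, 0, 1, 1, 0, 1),
    "hand":   (0, 1, 0, 1, 1, 0),
    "skull":  (1, 0, 0, 0, 1, 1),
    "pelvis": (1, 1, 1, 0, 0, 0),
}

def hamming(a, b):
    """Number of bit positions where two codewords disagree."""
    return sum(x != y for x, y in zip(a, b))

def decode(bit_outputs):
    """Map the binary classifiers' outputs to the nearest codeword's label.

    Because the codewords are well separated, a few flipped bits
    (individual classifier errors) are still decoded to the right label,
    which is the error-correcting property ECOC is named for.
    """
    return min(CODEBOOK, key=lambda label: hamming(CODEBOOK[label], bit_outputs))

# One bit flipped relative to the "skull" codeword is still decoded correctly.
print(decode((1, 0, 0, 0, 1, 0)))  # -> skull
```

In the paper's setting the decoded label would drive the deducible-retrieval step, after which appearance-based ranking within that label performs the traditional retrieval.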