Medical image annotation and retrieval using visual features

  • Authors:
  • Jing Liu; Yang Hu; Mingjing Li; Songde Ma; Wei-Ying Ma

  • Affiliations:
  • Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Science and Technology of China, Hefei, China; Microsoft Research Asia, Beijing, China; Institute of Automation, Chinese Academy of Sciences, Beijing, China; Microsoft Research Asia, Beijing, China

  • Venue:
  • CLEF'06: Proceedings of the 7th International Conference on Cross-Language Evaluation Forum: Evaluation of Multilingual and Multi-modal Information Retrieval
  • Year:
  • 2006

Abstract

In this paper, we present the algorithms and results of our participation in the medical image annotation and retrieval tasks of ImageCLEFmed 2006. For the annotation task, we explore both global and local features for describing medical images. Different kinds of global features are examined, and the most descriptive ones are extracted to represent the radiographs, effectively capturing the intensity, texture, and shape characteristics of the image content. We also evaluate the descriptive power of local features, i.e., local image patches, for medical images. A newly developed spatial pyramid matching algorithm is applied to measure the similarity between images represented by sets of local features. For both descriptors, a multi-class SVM is used to classify the images. We achieve an error rate of 17.6% with the global descriptor and 18.2% with the local one, ranking sixth and ninth, respectively, among all submissions. For the medical image retrieval task, we use only visual features to describe the images; no textual information is considered. Different features are used for gray-scale and color images. Our submission achieves a mean average precision (MAP) of 0.0681, ranking second among the 11 runs that likewise use only visual features.
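The spatial pyramid matching the abstract refers to (in the style of Lazebnik et al., 2006) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the pyramid depth, codebook size, and NumPy-based representation are assumptions. Each image is described by visual-word labels of its local patches plus their normalized positions; per-level grid histograms are compared by weighted histogram intersection.

```python
import numpy as np

def pyramid_histograms(words, xs, ys, n_words, levels=2):
    """Build spatial-pyramid histograms for one image.

    words : visual-word index per local patch, shape (n_patches,)
    xs, ys: patch coordinates normalized to [0, 1)
    Returns one flattened histogram per pyramid level (0..levels).
    """
    hists = []
    for level in range(levels + 1):
        cells = 2 ** level  # grid is cells x cells at this level
        h = np.zeros((cells, cells, n_words))
        cx = np.minimum((xs * cells).astype(int), cells - 1)
        cy = np.minimum((ys * cells).astype(int), cells - 1)
        for w, i, j in zip(words, cx, cy):
            h[i, j, w] += 1
        hists.append(h.ravel())
    return hists

def spatial_pyramid_kernel(ha, hb, levels=2):
    """Weighted histogram-intersection similarity between two pyramids.

    Uses the standard level weights 1/2^L for level 0 and
    1/2^(L-l+1) for level l >= 1, so matches found in finer
    cells contribute more to the score.
    """
    score = 0.0
    for level in range(levels + 1):
        inter = np.minimum(ha[level], hb[level]).sum()
        if level == 0:
            weight = 1.0 / 2 ** levels
        else:
            weight = 1.0 / 2 ** (levels - level + 1)
        score += weight * inter
    return score
```

The resulting pairwise similarity matrix can then be fed to a multi-class SVM as a precomputed kernel (e.g. scikit-learn's `SVC(kernel='precomputed')`), which matches the abstract's use of a multi-class SVM over local-feature similarities; the specific SVM library is again an assumption.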