Biomedical imaging modality classification using combined visual features and textual terms

  • Authors:
  • Xian-Hua Han; Yen-Wei Chen

  • Affiliations:
  • College of Information Science and Engineering, Ritsumeikan University, Kusatsu-Shi, Japan;College of Information Science and Engineering, Ritsumeikan University, Kusatsu-Shi, Japan and College of Information Sciences and Technology, The Pennsylvania State University, University Park, P ...

  • Venue:
  • Journal of Biomedical Imaging - Special issue on Machine Learning in Medical Imaging
  • Year:
  • 2011

Abstract

We describe an approach to automatic modality classification for the medical image retrieval task of the 2010 CLEF cross-language image retrieval campaign (ImageCLEF). The paper focuses on feature extraction from medical images and on fusing the extracted visual features with a textual feature for modality classification. As visual features, we use histogram descriptors of edge, gray, and color intensity together with block-based variation as global features, and a SIFT histogram as a local feature. As the textual feature, each image is represented by a binary histogram over a predefined vocabulary of words drawn from image captions. We then combine the different features using normalized kernel functions for SVM classification. Furthermore, for modality pairs that are easily confused, such as CT and MR or PET and NM, a local classifier distinguishes samples within the pair to improve performance. The proposed strategy is evaluated on the modality dataset provided by ImageCLEF 2010.
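The fusion step described in the abstract, combining several per-feature kernels that have each been normalized so they contribute on a comparable scale, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, the equal weighting, and the use of a cosine-normalized linear kernel with scikit-learn's precomputed-kernel SVM are all assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVC

def normalized_linear_kernel(X, Y):
    """Linear kernel normalized so that k(x, x) = 1 (cosine similarity)."""
    K = X @ Y.T
    nx = np.linalg.norm(X, axis=1, keepdims=True)
    ny = np.linalg.norm(Y, axis=1, keepdims=True)
    return K / (nx * ny.T + 1e-12)

def combined_kernel(feats_a, feats_b, weights=None):
    """Weighted sum of normalized kernels, one per feature type.

    feats_a/feats_b are lists of matrices, one matrix per feature type
    (e.g. edge histogram, SIFT bag-of-words, caption-word histogram).
    """
    n = len(feats_a)
    weights = weights if weights is not None else [1.0 / n] * n
    return sum(w * normalized_linear_kernel(Xa, Xb)
               for w, Xa, Xb in zip(weights, feats_a, feats_b))

# Toy data standing in for two feature types (dimensions are arbitrary).
rng = np.random.default_rng(0)
train_feats = [rng.random((40, 16)), rng.random((40, 100))]
labels = np.repeat([0, 1], 20)

K_train = combined_kernel(train_feats, train_feats)
clf = SVC(kernel="precomputed").fit(K_train, labels)
pred = clf.predict(K_train)
```

At test time the same `combined_kernel` would be evaluated between test and training features before calling `clf.predict`. The per-kernel normalization keeps any single feature type (e.g. the high-dimensional SIFT histogram) from dominating the fused kernel purely by scale.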