Combining textual and visual features for cross-language medical image retrieval

  • Authors:
  • Pei-Cheng Cheng;Been-Chian Chien;Hao-Ren Ke;Wei-Pang Yang

  • Affiliations:
  • Department of Computer & Information Science, National Chiao Tung University, Hsinchu, Taiwan, R.O.C.;Department of Computer Science and Information Engineering, National University of Tainan, Tainan, Taiwan, R.O.C.;Library and Institute of Information Management, National Chiao Tung University, Hsinchu, Taiwan, R.O.C.;Department of Computer & Information Science, National Chiao Tung University, Hsinchu, Taiwan, R.O.C.

  • Venue:
CLEF'05 Proceedings of the 6th International Conference of the Cross-Language Evaluation Forum: Accessing Multilingual Information Repositories
  • Year:
  • 2005


Abstract

In this paper we describe the techniques and experimental results for the medical image retrieval task and the automatic annotation task. We combine textual and content-based approaches to retrieve relevant medical images. The content-based approach uses four image features, and the text-based approach uses word expansion. Experimental results show that combining the content-based and text-based approaches outperforms either approach alone. For the automatic annotation task, we use Support Vector Machines (SVM) to learn image feature characteristics that assist image classification. Based on the SVM model, we analyze which image features are most promising for medical image retrieval. The results show that the spatial relationship between pixels is an important feature, because medical images typically share similar anatomic regions; consequently, image features that emphasize spatial relationships outperform the others.
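The fusion of text-based and content-based retrieval described above can be sketched as a weighted linear combination of the two similarity scores. The function name, the candidate data, and the weight `alpha` below are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch: fuse a text-based score and a content-based
# (image-feature) score with a weighted linear combination, then rank
# candidate images by the fused score. Scores and alpha are made up.

def combined_score(text_score, image_score, alpha=0.5):
    """Linearly combine a text-based and a content-based similarity score."""
    return alpha * text_score + (1 - alpha) * image_score

# Each candidate maps to a (text_score, image_score) pair.
candidates = {
    "img1": (0.9, 0.2),
    "img2": (0.4, 0.8),
    "img3": (0.7, 0.6),
}

# Sort candidates by fused score, best first.
ranked = sorted(candidates,
                key=lambda k: combined_score(*candidates[k]),
                reverse=True)
print(ranked)  # → ['img3', 'img2', 'img1']
```

In practice the weight would be tuned on a validation set; the point is only that a simple linear fusion lets the textual and visual evidence compensate for each other, which matches the paper's finding that the combination beats either modality alone.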