In this paper, we present the algorithms and results of our participation in the medical image annotation and retrieval tasks of ImageCLEFmed 2006. For the annotation task, we explore both global and local features to describe medical images. Several kinds of global features are examined, and the most descriptive ones are extracted to represent the radiographs, effectively capturing the intensity, texture, and shape characteristics of the image content. We also evaluate the descriptive power of local features, i.e., local image patches, for medical images. A recently developed spatial pyramid matching algorithm is applied to measure the similarity between images represented as sets of local features. Both descriptors are fed into a multi-class SVM to classify the images. We achieve an error rate of 17.6% with the global descriptor and 18.2% with the local one, ranking sixth and ninth, respectively, among all submissions. For the medical image retrieval task, we use only visual features to describe the images; no textual information is considered. Different features are used to describe grayscale and color images. Our submission achieves a mean average precision (MAP) of 0.0681, ranking second among the 11 runs that use only visual features.
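To make the local-feature pipeline concrete, the following is a minimal sketch of a spatial pyramid match kernel in the style of Lazebnik et al. (2006), the matching scheme referenced above. It assumes each image's local patches have already been quantized into visual-word indices with normalized positions; the function name, the feature layout `(x, y, word)`, and the pyramid depth are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def spatial_pyramid_kernel(feats_a, feats_b, num_words, levels=2):
    """Spatial pyramid match score between two images.

    feats_a, feats_b: arrays of shape (n, 3), rows (x, y, word) with
    x, y in [0, 1) and word a visual-word index in [0, num_words).
    Returns the weighted sum of histogram intersections over all
    pyramid levels 0..levels (level l splits the image into 2^l x 2^l cells).
    """
    def level_histogram(feats, level):
        cells = 2 ** level
        hist = np.zeros((cells, cells, num_words))
        for x, y, w in feats:
            # Clamp so points exactly on the right/bottom edge stay in-grid.
            i = min(int(x * cells), cells - 1)
            j = min(int(y * cells), cells - 1)
            hist[i, j, int(w)] += 1
        return hist.ravel()

    score = 0.0
    for level in range(levels + 1):
        # Standard SPM weights: coarsest level 1/2^L, doubling at each
        # finer level, so matches in small cells count more.
        if level == 0:
            weight = 1.0 / 2 ** levels
        else:
            weight = 1.0 / 2 ** (levels - level + 1)
        ha = level_histogram(feats_a, level)
        hb = level_histogram(feats_b, level)
        score += weight * np.minimum(ha, hb).sum()
    return score
```

In practice such a kernel is precomputed over all training-image pairs and handed to a multi-class SVM (e.g. one-vs-one with a precomputed Gram matrix), mirroring the SVM classification step described in the abstract.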