Modality is a key facet in medical image retrieval, as a user is typically interested in only one modality, e.g. radiology images, flowcharts, or pathology photos. While assessing image modality is trivial for humans, reliable automatic methods are required to handle large un-annotated image bases, such as the figures taken from millions of scientific publications. We present a multi-disciplinary approach that tackles the classification problem by combining image features, meta-data, textual, and referential information. We test our system's accuracy on the ImageCLEF 2011 medical modality classification data set. We show that using a fully affine-invariant feature descriptor and applying sparse coding to these descriptors in the Bag-of-Words image representation significantly increases classification accuracy. Our best method achieves 87.89% accuracy and outperforms the state of the art.
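The sparse-coded Bag-of-Words step described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it uses Orthogonal Matching Pursuit as a stand-in sparse coder, a randomly generated dictionary in place of one learned from training descriptors, and random vectors in place of real affine-invariant (ASIFT-style) descriptors. Each local descriptor is sparsely encoded against the dictionary, and the codes are max-pooled into one image-level vector suitable for a classifier.

```python
import numpy as np

def omp(D, x, T):
    """Orthogonal Matching Pursuit: sparse code of x over dictionary D.
    D: (k, d) array of unit-norm atoms (rows); x: (d,) descriptor; T: sparsity."""
    idx = []
    residual = x.copy()
    coef = np.zeros(0)
    for _ in range(T):
        corr = D @ residual                      # correlation with each atom
        j = int(np.argmax(np.abs(corr)))         # best-matching atom
        if j not in idx:
            idx.append(j)
        A = D[idx].T                             # (d, |idx|) selected atoms
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)  # re-fit on support
        residual = x - A @ coef
    code = np.zeros(D.shape[0])
    code[idx] = coef
    return code

def encode_image(D, descriptors, T=2):
    """Sparse-code every local descriptor, then max-pool absolute codes
    into a single Bag-of-Words-style image vector of length k."""
    codes = np.array([omp(D, x, T) for x in descriptors])
    return np.abs(codes).max(axis=0)

# Toy usage with a random dictionary and random "descriptors".
rng = np.random.default_rng(0)
d, k = 16, 8                                     # descriptor dim, dictionary size
D = rng.standard_normal((k, d))
D /= np.linalg.norm(D, axis=1, keepdims=True)    # unit-norm atoms
X = rng.standard_normal((40, d))                 # 40 local descriptors
image_vector = encode_image(D, X, T=2)           # (8,) pooled representation
```

In a real pipeline the dictionary would be learned (e.g. by K-SVD or online dictionary learning) from descriptors of the training images, and the pooled vectors would feed the modality classifier; max-pooling is the common choice with sparse codes because it keeps the strongest atom activations rather than averaging them away.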