In many vision problems it is easier to obtain training data in which only a subset of the images is annotated, because full annotation is too demanding for the user. For this reason, we consider the problem of classifying weakly-annotated images, where only a small subset of the database is annotated with keywords. We present and evaluate a new method that improves the effectiveness of content-based image classification by integrating semantic concepts extracted from text and by automatically extending annotations to the images with missing keywords. Our model is inspired by probabilistic graphical model theory: we propose a hierarchical mixture model that can handle missing values. Results of visual-textual classification, reported on a database of images collected from the Web and partially annotated by hand, show an improvement of 32.3% in recognition rate over classification based on visual information alone. Moreover, automatically extending the annotations of images with missing keywords using our model outperforms visual-textual classification by a further 6.8%. Finally, the proposed method is experimentally competitive with state-of-the-art classifiers.
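The abstract does not specify the hierarchical mixture model, but its key mechanism — a multimodal mixture in which the textual modality is marginalized out for images with missing keywords — can be sketched with EM. The sketch below is an illustrative assumption, not the paper's model: it pairs a spherical Gaussian per class for visual features with a per-class Bernoulli distribution over a keyword vocabulary, and all function and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def em_multimodal_mixture(X, W, K, n_iter=50):
    """EM for a K-component mixture over (visual, text) pairs.

    X : (n, d) visual features, modelled per class as a spherical Gaussian.
    W : length-n list of binary keyword vectors, or None when the image is
        weakly annotated; missing keywords are marginalized out, so those
        images contribute only their visual likelihood.
    """
    n, d = X.shape
    # Farthest-point initialisation of the Gaussian means.
    idx = [int(rng.integers(n))]
    for _ in range(K - 1):
        d2 = ((X[:, None, :] - X[idx][None]) ** 2).sum(-1).min(1)
        idx.append(int(d2.argmax()))
    mu = X[idx].copy()
    pi = np.full(K, 1.0 / K)
    sigma2 = np.full(K, X.var() + 1e-6)
    V = next(w for w in W if w is not None).shape[0]
    theta = rng.uniform(0.25, 0.75, size=(K, V))  # per-class keyword probs
    for _ in range(n_iter):
        # E-step: log responsibilities; text term only where observed.
        logp = np.zeros((n, K))
        for k in range(K):
            diff = X - mu[k]
            logp[:, k] = (np.log(pi[k])
                          - 0.5 * d * np.log(2 * np.pi * sigma2[k])
                          - 0.5 * (diff ** 2).sum(1) / sigma2[k])
            for i, w in enumerate(W):
                if w is not None:
                    logp[i, k] += (w * np.log(theta[k])
                                   + (1 - w) * np.log(1 - theta[k])).sum()
        logp -= logp.max(1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        # M-step: standard mixture updates; the Bernoulli part is fit
        # only on rows whose keywords were observed.
        Nk = r.sum(0) + 1e-12
        pi = Nk / n
        mu = (r.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            sigma2[k] = (r[:, k] * (diff ** 2).sum(1)).sum() / (d * Nk[k]) + 1e-6
        obs = [i for i, w in enumerate(W) if w is not None]
        Wo, Ro = np.array([W[i] for i in obs]), r[obs]
        theta = (Ro.T @ Wo + 1.0) / (Ro.sum(0)[:, None] + 2.0)  # Laplace smoothing
    return pi, mu, sigma2, theta, r
```

Annotation extension then falls out of the same model: for an image `i` with no keywords, the expected keyword vector is `r[i] @ theta`, and keywords above a threshold can be attached before running the final visual-textual classification.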