Visual concept learning often requires a large set of training images. In practice, however, acquiring noise-free training labels with sufficient positive examples is expensive. A plausible way to collect training data is to sample the abundant user-tagged images on social media websites. Under the general belief that correct tagging is more probable than incorrect tagging, such a solution sounds feasible, though it is not without challenges. First, user tags can be subjective and, to a certain extent, ambiguous. For instance, an image tagged "whales" may simply be a picture taken at an ocean museum; learning the concept "whales" from such training samples will not be effective. Second, user tags can be overly abbreviated or indirect. For instance, an image depicting the concept "wedding" may be tagged only with "love" or the couple's names. As a result, crawling a sufficient number of positive training examples is difficult. This paper empirically studies the impact of exploiting user-tagged images for concept learning, investigating how the quality of pseudo training images affects concept detection performance. In addition, we propose a simple approach, named semantic field, for predicting the relevance between a target concept and the tag list associated with an image. Specifically, the relevance is determined through concept-tag co-occurrence, estimated from external sources such as WordNet and Wikipedia. The proposed approach is shown to be effective in selecting pseudo training examples, exhibiting better performance in concept learning than alternatives based on keyword sampling and tag voting.
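The semantic-field idea can be sketched as follows. This is a minimal illustration only: it uses normalized pointwise mutual information (NPMI) as the concept-tag co-occurrence measure and averages scores over an image's tag list. The toy counts, the choice of NPMI, and the averaging rule are assumptions for illustration, not the paper's exact formulation; in practice the statistics would be mined from external sources such as WordNet or Wikipedia.

```python
from math import log

# Hypothetical co-occurrence statistics standing in for counts mined from an
# external corpus (e.g., Wikipedia article co-mentions). Numbers are invented.
DOC_COUNT = 1_000_000
term_freq = {"wedding": 5000, "love": 40000, "cake": 8000,
             "whales": 1200, "ocean": 20000}
pair_freq = {("wedding", "love"): 1500, ("wedding", "cake"): 900,
             ("whales", "ocean"): 600}

def npmi(a: str, b: str) -> float:
    """Normalized pointwise mutual information of two terms, in [-1, 1]."""
    key = (a, b) if (a, b) in pair_freq else (b, a)
    f_ab = pair_freq.get(key, 0)
    if f_ab == 0:
        return -1.0  # terms never co-occur: minimal relevance
    p_a = term_freq[a] / DOC_COUNT
    p_b = term_freq[b] / DOC_COUNT
    p_ab = f_ab / DOC_COUNT
    return log(p_ab / (p_a * p_b)) / -log(p_ab)

def semantic_field_score(concept: str, tags: list[str]) -> float:
    """Predict concept-to-tag-list relevance as the mean concept-tag NPMI."""
    if not tags:
        return -1.0
    return sum(npmi(concept, t) for t in tags) / len(tags)
```

Images would then be ranked by `semantic_field_score` for a target concept, with the top-ranked ones kept as pseudo-positive training examples; e.g., an image tagged "love" and "cake" scores higher for "wedding" than one tagged "ocean".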