On the sampling of web images for learning visual concept classifiers
Proceedings of the ACM International Conference on Image and Video Retrieval
A common obstacle in learning effective visual concept classifiers is the scarcity of positive training examples, owing to the high cost of manual labeling. This paper explores the sampling of weakly tagged web images for concept learning without human assistance. In particular, ontology knowledge is incorporated for semantic pooling of positive examples from ontologically neighboring concepts. This widens the coverage of the positive samples with visually more diversified content, which is important for learning a good concept classifier. We experiment with two learning strategies: aggregate and incremental. The former re-trains a new classifier on the combination of existing and newly collected examples, while the latter updates the existing model incrementally with the new samples. Extensive experiments on the NUS-WIDE and VOC 2010 datasets show very encouraging results, even when compared with classifiers learned from expert-labeled training examples.
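The two ideas in the abstract — semantic pooling of positives from ontology neighbors, and aggregate versus incremental training — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the neighbor map is a hypothetical toy ontology, and a simple perceptron-style linear model stands in for the SVMs used in the paper.

```python
import numpy as np

# Hypothetical toy ontology: semantic pooling draws extra positives
# from ontologically neighboring concepts (illustrative neighbor map).
ONTOLOGY_NEIGHBORS = {"dog": ["puppy", "canine"], "cat": ["kitten"]}

def pool_positives(concept, examples_by_tag):
    """Widen the positive set with examples tagged by neighboring concepts."""
    tags = [concept] + ONTOLOGY_NEIGHBORS.get(concept, [])
    return [x for t in tags for x in examples_by_tag.get(t, [])]

class LinearClassifier:
    """Minimal linear model standing in for an SVM (illustration only)."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def fit(self, X, y, epochs=20):
        """Aggregate strategy: re-train from scratch on all pooled data."""
        self.w = np.zeros_like(self.w)
        return self.partial_fit(X, y, epochs)

    def partial_fit(self, X, y, epochs=20):
        """Incremental strategy: update the existing weights with new samples."""
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * xi.dot(self.w) <= 0:   # misclassified: nudge weights
                    self.w += self.lr * yi * xi
        return self

    def predict(self, X):
        return np.sign(X.dot(self.w))
```

Under the aggregate strategy one would call `fit` on old plus new examples each round; under the incremental strategy, `partial_fit` on each new batch alone, keeping the learned weights.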