Active learning with multiple classifiers for multimedia indexing
Multimedia Tools and Applications
In this paper, we compare active learning strategies for indexing concepts in video shots. Active learning is simulated on subsets of a fully annotated data set instead of actually calling for user intervention. Training uses the collaborative annotation of 39 concepts from the TRECVID 2005 campaign; performance is measured on the 20 concepts selected for the TRECVID 2006 concept detection task. The simulation allows us to explore the effect of several parameters: the selection strategy, the annotated fraction of the data set, the size of the data set, the number of iterations, and the relative difficulty of the concepts. Three strategies were compared: the first two select the most probable and the most uncertain samples, respectively; the third is a random choice. For easy concepts, the "most probable" strategy is the best one when less than 15% of the data set is annotated, and the "most uncertain" strategy is the best one when 15% or more is annotated. The two strategies are roughly equivalent for moderately difficult and difficult concepts. In all cases, the maximum performance is reached when 12-15% of the whole data set is annotated. This result depends, however, on the step size and the training set size. One-fortieth of the training set size is a good value for the step size, and the size of the subset that has to be annotated to reach the maximum achievable performance grows with the square root of the training set size. Finally, the "most probable" strategy is more recall-oriented, while the "most uncertain" strategy is more precision-oriented.
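The three selection strategies compared above can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the function name `select_batch` and the assumption that the classifier outputs a concept probability in [0, 1] per sample are ours (the paper's experiments rely on SVM classifiers, e.g. via LIBSVM probability estimates).

```python
import numpy as np

rng = np.random.default_rng(0)

def select_batch(scores, unlabeled, step, strategy):
    """Choose the next `step` samples to send for annotation.

    scores    : per-sample probability of the target concept from the
                current classifier; values assumed to lie in [0, 1].
    unlabeled : indices of the samples not yet annotated.
    strategy  : "most_probable", "most_uncertain", or "random".
    """
    s = np.asarray(scores)[unlabeled]
    if strategy == "most_probable":
        order = np.argsort(-s)               # highest concept probability first
    elif strategy == "most_uncertain":
        order = np.argsort(np.abs(s - 0.5))  # closest to the decision boundary
    elif strategy == "random":
        order = rng.permutation(len(unlabeled))
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return [unlabeled[i] for i in order[:step]]
```

In a full simulation, this selection would sit inside a loop that retrains the classifier after each annotated batch, with `step` set to one-fortieth of the training set size as the results above suggest.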