Conventional active learning dynamically constructs the training set only along the sample dimension. While this is the right strategy for binary classification, it is suboptimal for multilabel image classification. We argue that, for each selected sample, only a few informative labels need to be annotated, while the rest can be inferred from label correlations: because the labels are inherently correlated, they contribute unequally to reducing the classification error. To this end, we propose to select sample-label pairs, rather than whole samples, so as to minimize a multilabel Bayesian classification error bound. We call this two-dimensional active learning because it explores both the sample dimension and the label dimension. Furthermore, because active learning grows the training set rapidly over time, it becomes intractable for an offline learner to retrain a new model on the whole training set. We therefore develop an efficient online learner that adapts the existing model by minimizing the distance between the old model and the new one under a set of multilabel constraints. The effectiveness and efficiency of the proposed method are evaluated on two benchmark data sets and a realistic collection from Corbis, a real-world image-sharing Web site.
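The sample-label pair selection idea can be sketched as follows. This is a simplified illustration, not the paper's Bayesian error bound: it assumes we already have per-pair predicted probabilities `probs[i, j]` and a (hypothetical) label-correlation matrix `label_corr`, and it scores each (sample, label) pair by its uncertainty, discounted when correlated labels are already predicted confidently.

```python
import numpy as np

def select_sample_label_pairs(probs, label_corr, batch_size=5):
    """Toy two-dimensional active learning: rank (sample, label) pairs
    instead of whole samples.  probs[i, j] is the current model's predicted
    probability that sample i carries label j; label_corr is an assumed
    label-correlation matrix.  This is a stand-in for the paper's
    multilabel Bayesian error bound, not its exact criterion."""
    # Uncertainty of each (sample, label) pair: highest near p = 0.5.
    uncertainty = 1.0 - 2.0 * np.abs(probs - 0.5)
    # Down-weight labels largely implied by correlated, confidently
    # predicted labels (a crude stand-in for label inference).
    confident = 2.0 * np.abs(probs - 0.5)
    implied = (confident @ np.abs(label_corr)) / label_corr.shape[0]
    score = uncertainty * (1.0 - implied.clip(0.0, 1.0))
    # Return the top-scoring (sample_index, label_index) pairs.
    flat = np.argsort(score, axis=None)[::-1][:batch_size]
    return [tuple(int(v) for v in np.unravel_index(i, score.shape))
            for i in flat]
```

The key contrast with conventional active learning is the return type: a list of (sample, label) pairs, so the annotator is asked about individual labels rather than a sample's full label vector.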
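The online adaptation step, minimizing the distance between the old and new models subject to constraints on the new data, is in the spirit of the well-known passive-aggressive algorithms. Below is a minimal PA-I step for a single binary label as a generic stand-in; the paper's learner uses a set of multilabel constraints instead, which this sketch does not reproduce.

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """One passive-aggressive (PA-I) step: find the weight vector closest
    to the current one that classifies (x, y) with margin 1.  A generic
    analogue of the paper's online learner, not its exact multilabel
    formulation; y is +1 or -1."""
    loss = max(0.0, 1.0 - y * float(np.dot(w, x)))  # hinge loss of current model
    if loss == 0.0:
        return w                                    # constraint already satisfied
    tau = min(C, loss / float(np.dot(x, x)))        # PA-I step size
    return w + tau * y * x                          # minimal-distance correction
```

Because each update only nudges the previous weights, the cost per new annotation is constant, which is what makes retraining on the whole growing training set unnecessary.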