Labeling training data is time-consuming but essential for supervised learning models. To address this problem, active learning has been studied and applied to select informative and representative data points for labeling. However, in the early stage of an experiment only a few (or no) labeled data points exist, so the most representative samples should be selected first. In this paper, we propose a novel robust active learning method that handles this early-stage experimental design problem by selecting the most representative data points. Selecting representative samples is NP-hard, so we employ a structured sparsity-inducing norm to relax the objective into an efficient convex formulation. Meanwhile, a robust sparse-representation loss function is used to reduce the effect of outliers. We introduce a new efficient optimization algorithm that solves our non-smooth objective at low computational cost and with proven global convergence. Empirical results on both single-label and multi-label classification benchmark data sets demonstrate the promise of our method.
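To make the idea concrete, the convex relaxation described above can be sketched with a simplified self-representation objective, min_Z 0.5·||X − XZ||_F² + γ·||Z||_{2,1}, where the ℓ2,1 norm encourages whole rows of Z to vanish so that the surviving rows mark representative samples. This is a hedged illustration only: the function name, the proximal-gradient solver, and the Frobenius loss (in place of the paper's robust sparse-representation loss) are assumptions for clarity, not the authors' algorithm.

```python
import numpy as np

def select_representatives(X, k, gamma=0.1, n_iter=200):
    """Sketch of convex representative selection via proximal gradient:
    min_Z 0.5*||X - X Z||_F^2 + gamma*||Z||_{2,1},
    then return the k samples whose rows of Z have the largest l2 norm.
    Note: the paper's robust l2,1 loss is replaced here by a plain
    Frobenius loss to keep the sketch short."""
    d, n = X.shape                   # columns of X are samples
    G = X.T @ X                      # n x n Gram matrix
    L = np.linalg.eigvalsh(G)[-1]    # Lipschitz constant of the gradient
    step = 1.0 / L
    Z = np.zeros((n, n))
    for _ in range(n_iter):
        grad = G @ Z - G             # gradient of the smooth term
        Z = Z - step * grad
        # row-wise soft-thresholding: proximal operator of gamma*||.||_{2,1}
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - step * gamma / np.maximum(norms, 1e-12))
        Z = Z * scale
    row_norms = np.linalg.norm(Z, axis=1)
    return np.argsort(-row_norms)[:k]
```

The ℓ2,1 proximal step is what makes the structured sparsity tractable: it shrinks entire rows toward zero jointly, which is the convex surrogate for the combinatorial choice of a sample subset.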