In many learning tasks, obtaining labeled instances is expensive, whereas unlabeled instances can be collected easily. Active learners can significantly reduce labeling cost by selecting only the most informative instances for labeling. Graph-based learning methods have become popular in machine learning in recent years because of their clear mathematical framework and strong performance with suitable models. However, they incur heavy computation when the whole graph is very large. In this paper, we propose a scalable algorithm for graph-based active learning. The proposed method proceeds as follows. First, a backbone graph is constructed instead of the whole graph. Then the instances in the backbone graph are chosen for labeling. Finally, the instances with the maximum expected information gain are sampled repeatedly based on the graph regularization model. Experiments show that the proposed method achieves smaller data utilization and average deficiency than other popular active learners on selected datasets from semi-supervised learning benchmarks.
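The query loop described in the abstract can be illustrated with a minimal sketch. The code below is not the paper's algorithm: it uses the classic harmonic-function solution on the graph Laplacian (a standard graph regularization model) and substitutes simple uncertainty sampling for the paper's expected-information-gain criterion; the backbone-graph construction step is omitted and all function names are illustrative.

```python
import numpy as np

def harmonic_label_propagation(W, labels, labeled_idx):
    """Compute soft labels on a graph with weight matrix W by solving
    the harmonic (Laplacian-regularized) system: labeled nodes are
    clamped, unlabeled nodes get the smoothest interpolation."""
    n = W.shape[0]
    D = np.diag(W.sum(axis=1))
    L = D - W  # unnormalized graph Laplacian
    unlabeled = [i for i in range(n) if i not in labeled_idx]
    Luu = L[np.ix_(unlabeled, unlabeled)]
    Lul = L[np.ix_(unlabeled, labeled_idx)]
    yl = np.array([labels[i] for i in labeled_idx], dtype=float)
    fu = np.linalg.solve(Luu, -Lul @ yl)  # soft labels for unlabeled nodes
    f = np.zeros(n)
    f[labeled_idx] = yl
    f[unlabeled] = fu
    return f

def query_most_uncertain(f, labeled_idx):
    """Pick the unlabeled node whose soft label is closest to 0.5
    (a simple proxy for the most informative instance)."""
    candidates = [i for i in range(len(f)) if i not in labeled_idx]
    return min(candidates, key=lambda i: abs(f[i] - 0.5))

# Toy chain graph 0-1-2-3-4 with the two endpoints labeled.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
labeled = [0, 4]
labels = {0: 0.0, 4: 1.0}
f = harmonic_label_propagation(W, labels, labeled)
query = query_most_uncertain(f, labeled)  # the midpoint node is queried
```

On this chain the harmonic solution interpolates linearly between the two labeled endpoints, so the middle node (soft label 0.5) is selected for labeling, after which the system would be re-solved with the new label; the paper's method instead ranks candidates by expected information gain and works on a backbone graph to keep this loop scalable.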