In this paper, we first present an active Averaged One-Dependence Estimator (AODE) learning classification model, which improves the performance of AODE by selecting only the samples with maximum information and asking experts to label them. Several common sampling strategies for active learning are discussed. Unfortunately, these methods can select outliers, which increases the classification error and the computational complexity. Motivated by this analysis, we propose a new active learning strategy that combines uncertainty sampling with a classification-accuracy-loss sampling strategy. Experimental results on three UCI benchmark data sets and a real remote sensing data set show that the AODE classification model, combined with our novel active learning strategy, achieves better classification accuracy with fewer labelled samples than state-of-the-art active learning approaches.
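As a minimal illustration of the uncertainty-sampling strategy the abstract builds on, the sketch below picks the unlabelled pool sample whose predicted class distribution has the highest entropy. The probability values and function names are illustrative assumptions, not taken from the paper, and the paper's own strategy additionally weighs classification-accuracy loss to avoid querying outliers.

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_uncertain(pool_probs):
    """Return the index of the pool sample whose predicted class
    distribution is most uncertain (highest entropy); this is the
    sample an expert would be asked to label next."""
    return max(range(len(pool_probs)), key=lambda i: entropy(pool_probs[i]))

# Hypothetical classifier outputs for four unlabelled samples (2 classes).
pool = [
    [0.95, 0.05],  # confident prediction
    [0.55, 0.45],  # nearly tied -> most informative to label
    [0.80, 0.20],
    [0.99, 0.01],
]
print(select_most_uncertain(pool))  # -> 1
```

In a pool-based loop, the selected sample would be labelled, added to the training set, the classifier retrained, and the selection repeated until the labelling budget is exhausted.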