A sequential algorithm for training text classifiers
SIGIR '94 Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval
Improving Generalization with Active Learning
Machine Learning - Special issue on structured connectionist systems
Selective Sampling Using the Query by Committee Algorithm
Machine Learning
Some label efficient learning results
COLT '97 Proceedings of the tenth annual conference on Computational learning theory
Improving retrieval performance by relevance feedback
Readings in information retrieval
A Web marketing system with automatic pricing
Proceedings of the 9th international World Wide Web conference on Computer networks: the international journal of computer and telecommunications networking
Information and Computation
Introduction to Reinforcement Learning
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
Toward Optimal Active Learning through Sampling Estimation of Error Reduction
ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning
Less is More: Active Learning with Support Vector Machines
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Worst-Case Analysis of Selective Sampling for Linear Classification
The Journal of Machine Learning Research
Relaxed online SVMs for spam filtering
SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
The impact of latency on online classification learning with concept drift
KSEM'10 Proceedings of the 4th international conference on Knowledge science, engineering and management
In many data mining applications, online labeling feedback is available only for examples that were predicted to belong to the positive class. Such applications include spam filtering in the case where users never check emails marked "spam", document retrieval where users cannot give relevance feedback on unretrieved documents, and online advertising where user behavior cannot be observed for unshown advertisements. One-sided feedback can cripple the performance of classical mistake-driven online learners such as Perceptron. Previous work under the Apple Tasting framework showed how to transform standard online learners into successful learners from one-sided feedback. However, we find in practice that this transformation may request more labels than necessary to achieve strong performance. In this paper, we employ two active learning methods that reduce the number of labels requested in practice. One method is the use of Label Efficient active learning. The other, somewhat surprisingly, is the use of margin-based learners without modification, which we show combine implicit active learning with a greedy strategy for managing the exploration-exploitation tradeoff. Experimental results show that these methods can be significantly more effective in practice than those using the Apple Tasting transformation, even on minority-class problems.
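To make the one-sided-feedback setting concrete, the following is a minimal sketch (not the paper's implementation) of a Perceptron that sees a label only when it predicts positive, combined with a margin-based query rule in the spirit of label-efficient learning: the revealed label is requested with probability b / (b + |margin|), so confident positive predictions rarely consume a label. The class name, the constant `b`, and the `get_label` callback are illustrative assumptions.

```python
import random


class LabelEfficientPerceptron:
    """Perceptron learning from one-sided feedback.

    The true label is observable only for examples predicted positive
    (e.g. a shown ad or a retrieved document), and even then it is
    queried only with probability b / (b + |margin|).
    """

    def __init__(self, dim, b=1.0):
        self.w = [0.0] * dim  # linear weights
        self.b = b            # query-rate parameter (illustrative)

    def margin(self, x):
        # signed distance proxy: <w, x>
        return sum(wi * xi for wi, xi in zip(self.w, x))

    def step(self, x, get_label):
        """Process one example; return True if a label was queried."""
        m = self.margin(x)
        if m < 0:
            # One-sided feedback: no label is ever revealed
            # for examples predicted negative.
            return False
        # Label-efficient rule: query with prob. b / (b + |m|),
        # so large-margin (confident) predictions rarely query.
        if random.random() < self.b / (self.b + abs(m)):
            y = get_label()  # +1 or -1, obtained only now
            if y * m <= 0:   # mistake-driven Perceptron update
                self.w = [wi + y * xi for wi, xi in zip(self.w, x)]
            return True
        return False
```

Note that an unmodified margin-based learner arises as the special case where every revealed label is used (query probability 1); the greedy exploration-exploitation behavior the abstract mentions comes from the learner only "exploring" examples it already predicts positive.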