Supervised learning deals with the inference of a distribution over an output or label space Y conditioned on points in an observation space X, given a training dataset D of pairs in X × Y. However, in many applications of interest, large amounts of observations are easy to acquire, while generating labels is time-consuming or costly. One way to deal with this problem is active learning, where the points to be labelled are selected so as to produce a model that performs better than one trained on an equal number of randomly sampled points. In this paper, we instead propose to address the labelling cost directly: the learning goal is defined as the minimisation of a cost which is a function of the expected model performance and the total cost of the labels used. This allows the development of general strategies and specific algorithms for (a) optimal stopping, where the expected cost dictates whether label acquisition should continue, and (b) empirical evaluation, where the cost serves as a performance metric for a given combination of inference, stopping and sampling methods. Although the main focus of the paper is optimal stopping, we also aim to provide background for further developments and discussion in the related field of active learning.
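To make the combined objective concrete, the following is a minimal sketch, not the paper's algorithm: it assumes a hypothetical learning curve `expected_error(n)` and combines it with a per-label cost into a single total cost, then stops acquiring labels once the marginal expected gain from one more label no longer covers that label's cost. All function names, the learning-curve form, and the cost values are illustrative assumptions.

```python
def expected_error(n_labels):
    # Hypothetical learning curve: error decays with the number of labels.
    # In practice this would be estimated from data, not assumed.
    return 0.5 / (1 + n_labels) ** 0.5

def total_cost(n_labels, error_cost=100.0, label_cost=1.0):
    # Total cost = expected misclassification cost + cost of labels acquired,
    # mirroring a cost that is a function of model performance and label cost.
    return error_cost * expected_error(n_labels) + label_cost * n_labels

def optimal_stop(error_cost=100.0, label_cost=1.0, max_labels=1000):
    # Greedy stopping rule: continue only while the next label is expected
    # to reduce total cost.
    n = 0
    while n < max_labels:
        gain = total_cost(n, error_cost, label_cost) - total_cost(n + 1, error_cost, label_cost)
        if gain <= 0:  # the next label costs more than its expected benefit
            break
        n += 1
    return n
```

With the illustrative learning curve above, `optimal_stop()` halts after a small number of labels, at the point where the discrete cost curve attains its minimum; the same stopping logic applies to any estimated learning curve.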