We explore a transfer learning setting in which a finite sequence of target concepts is sampled independently, according to an unknown distribution, from a known family. We study the total number of labeled examples required to learn all targets to an arbitrary specified expected accuracy, focusing on the asymptotics in the number of tasks and in the desired accuracy. Our primary interest is a formal understanding of the fundamental benefits of transfer learning, compared to learning each target independently of the others. Our approach to the transfer problem is general, in the sense that it can be combined with a variety of learning protocols. As a particularly interesting application, we study in detail the benefits of transfer for self-verifying active learning; in this setting, we find that the number of labeled examples required for learning with transfer is often significantly smaller than the number required for learning each target independently.
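The intuition behind this setting can be illustrated with a toy sketch (not the paper's algorithm): target concepts are 1-D thresholds, active learning localizes a threshold by binary search on label queries, and "transfer" is approximated by narrowing the initial search interval using thresholds observed on earlier tasks. All names and parameters below (the Gaussian task distribution, the padding of 0.05, the 5-task warm-up) are illustrative assumptions.

```python
import random

def binary_search_labels(t, lo, hi, eps):
    """Count label queries needed to localize threshold t within eps,
    starting from the interval [lo, hi] (binary-search active learning)."""
    if not (lo <= t <= hi):
        lo, hi = 0.0, 1.0          # prior interval missed the target: fall back
    labels = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        labels += 1                 # query the label of the point `mid`
        if t <= mid:                # label reveals which side t lies on
            hi = mid
        else:
            lo = mid
    return labels

random.seed(0)
eps = 1e-3
# Unknown task distribution (assumed here): thresholds concentrated near 0.5.
tasks = [min(max(random.gauss(0.5, 0.02), 0.0), 1.0) for _ in range(200)]

# Learning each target independently: always start from [0, 1].
independent = sum(binary_search_labels(t, 0.0, 1.0, eps) for t in tasks)

# Crude "transfer": after a short warm-up, start from the padded range
# of previously observed thresholds instead of [0, 1].
seen, transfer = [], 0
for t in tasks:
    if len(seen) < 5:
        lo, hi = 0.0, 1.0
    else:
        lo = max(0.0, min(seen) - 0.05)
        hi = min(1.0, max(seen) + 0.05)
    transfer += binary_search_labels(t, lo, hi, eps)
    seen.append(t)

print(independent, transfer)  # transfer typically uses fewer labels in total
```

Because the tasks cluster, the transferred interval is much shorter than [0, 1], so each binary search needs fewer queries; this mirrors, in miniature, the abstract's claim that exploiting the shared task distribution reduces the total label complexity across tasks.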