If regression tasks are sampled from a distribution, then the expected error on a future task can be estimated, uniformly over a class of regularizing or pre-processing transformations, by the average of the empirical errors on the data of a finite sample of tasks. The bound is dimension-free, justifies optimization of the pre-processing feature map, and explains the circumstances under which learning-to-learn is preferable to single-task learning.
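
A schematic form of such a bound may help fix ideas; the notation below is illustrative, not the paper's own. Suppose tasks $\mu_1, \dots, \mu_n$ are drawn i.i.d. from an environment $\mathcal{E}$, let $\hat{R}_i(h)$ denote the empirical error of a pre-processing transformation $h$ on the data of task $i$, and let $R_\mu(h)$ denote the expected error attainable on task $\mu$ after pre-processing with $h$. A dimension-free transfer bound of the kind described would read, uniformly over $h \in \mathcal{H}$,

\[
\mathbb{E}_{\mu \sim \mathcal{E}}\, R_\mu(h) \;\le\; \frac{1}{n} \sum_{i=1}^{n} \hat{R}_i(h) \;+\; \frac{C(\mathcal{H})}{\sqrt{n}},
\]

where $C(\mathcal{H})$ is a complexity term for the class of transformations that does not depend on the input dimension. Read this way, minimizing the averaged empirical errors over $\mathcal{H}$ is justified, and the advantage over single-task learning appears when many related tasks are available, so that the $1/\sqrt{n}$ term is small relative to the estimation error incurred by learning each task in isolation.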