Learning multiple "related" tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. Previous work on multiple-task learning has started from the assumption that the tasks to be learned jointly are "algorithmically related," in the sense that applying a specific learning algorithm to these tasks is assumed to yield similar results. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example-generating distributions that underlie the tasks. We provide a formal framework for this notion of task relatedness, which captures a sub-domain of the wide scope of settings in which a multiple-task learning approach may apply. Our notion of task similarity is relevant to a variety of real-life multitask learning scenarios and allows the formal derivation of generalization bounds that are strictly stronger than the previously known bounds for both the learning-to-learn and the multitask learning scenarios. We give precise conditions under which our bounds guarantee generalization on the basis of smaller sample sizes than the standard single-task approach.
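The abstract does not reproduce the formal definitions, so the following is only a hedged sketch of how distribution-based relatedness of this kind is typically formalized; the symbols $\mathcal{F}$, $d_0$, and $d_{\mathcal{F}}$ below are illustrative assumptions, not the paper's exact statement. The idea is to fix a family of transformations of the input space, call two tasks related when their example-generating distributions are images of one another under such a transformation, and then amortize a shared capacity term over the number of jointly learned tasks:

```latex
% Hedged sketch of distribution-based task relatedness and an amortized
% multi-task sample-size bound (illustrative notation; not a verbatim
% statement of the paper's definitions or theorems).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Fix a set $\mathcal{F}$ of transformations $f:\mathcal{X}\to\mathcal{X}$.
Tasks with example-generating distributions $P_1,P_2$ over
$\mathcal{X}\times\{0,1\}$ are \emph{$\mathcal{F}$-related} if, for some
$f\in\mathcal{F}$,
\[
  P_2(A) \;=\; P_1\bigl(f^{-1}(A)\bigr)
  \quad\text{for every measurable set } A .
\]

% Illustrative shape of an amortized bound: with $n$ related tasks and
% $m$ examples per task, a shared capacity term $d_{\mathcal{F}}$ is
% paid once across all tasks, while only a residual per-task capacity
% $d_0$ is paid per task, so that (up to constants and log factors)
\[
  m \;=\; O\!\left(\frac{1}{\epsilon}
          \left(d_0 + \frac{d_{\mathcal{F}}}{n}\right)
          \log\frac{1}{\epsilon}\right)
\]
examples per task suffice for generalization error $\epsilon$. When
$d_{\mathcal{F}} \gg d_0$ and $n$ is large, this is smaller than the
standard single-task requirement
$m = O\!\bigl(\frac{d}{\epsilon}\log\frac{1}{\epsilon}\bigr)$,
matching the abstract's claim that joint learning can guarantee
generalization from smaller samples.

\end{document}
```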