Learning from Relevant Tasks Only
ECML '07 Proceedings of the 18th European conference on Machine Learning
We introduce relevant subtask learning, a new learning problem that is a variant of multi-task learning. The goal is to build a classifier for a task-of-interest for which we have too few training samples. We additionally have "supplementary data" collected from other tasks, but it is uncertain which of those samples are relevant, that is, which are classified in the same way as in the task-of-interest. The research problem is how to use the supplementary data from the other tasks to improve the classifier for the task-of-interest. We show how to solve the problem and demonstrate the solution with logistic regression classifiers. The key idea is to model each task as a mixture of relevant and irrelevant samples, and to model the irrelevant part with a sufficiently flexible model so that it does not distort the model of the relevant data. We give two learning algorithms for the method: a simple maximum likelihood optimization algorithm and a more advanced variational Bayes inference algorithm. In both cases we show that the method outperforms a comparable multi-task learning model and naive methods.
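The maximum likelihood variant described above can be sketched as an EM procedure: a shared logistic regression models the relevant samples, each supplementary task gets its own "nuisance" logistic regression for its irrelevant samples, and per-sample relevance responsibilities are re-estimated each iteration. The sketch below is an illustrative assumption of how such an algorithm might look, not the authors' implementation; all function names and hyperparameters are hypothetical.

```python
# Hedged sketch of relevant subtask learning with logistic regression.
# Each supplementary task is a mixture: samples relevant to the
# task-of-interest (shared weights w) vs. irrelevant samples (per-task
# nuisance weights v_k). EM estimates per-sample relevance weights.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_weighted_logreg(X, y, sample_w, n_steps=200, lr=0.1):
    """Weighted logistic regression via gradient ascent on the log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(n_steps):
        p = sigmoid(X @ w)
        grad = X.T @ (sample_w * (y - p)) / max(sample_w.sum(), 1e-9)
        w += lr * grad
    return w

def relevant_subtask_logreg(X0, y0, aux, n_em=20):
    """X0, y0: task-of-interest data; aux: list of (Xk, yk) supplementary tasks.
    Returns the shared classifier weights and per-task relevance responsibilities."""
    d = X0.shape[1]
    w = fit_weighted_logreg(X0, y0, np.ones(len(y0)))   # shared "relevant" model
    vs = [np.zeros(d) for _ in aux]                     # per-task nuisance models
    pis = [0.5] * len(aux)                              # per-task relevance priors
    resps = [np.full(len(yk), 0.5) for _, yk in aux]
    for _ in range(n_em):
        # E-step: probability that each supplementary sample is relevant.
        for k, (Xk, yk) in enumerate(aux):
            lik_rel = sigmoid(Xk @ w)
            lik_rel = np.where(yk == 1, lik_rel, 1 - lik_rel)
            lik_irr = sigmoid(Xk @ vs[k])
            lik_irr = np.where(yk == 1, lik_irr, 1 - lik_irr)
            resps[k] = pis[k] * lik_rel / (
                pis[k] * lik_rel + (1 - pis[k]) * lik_irr + 1e-12)
            pis[k] = resps[k].mean()
        # M-step: refit shared model on task-of-interest data plus
        # responsibility-weighted supplementary data; refit nuisance models.
        Xall = np.vstack([X0] + [Xk for Xk, _ in aux])
        yall = np.concatenate([y0] + [yk for _, yk in aux])
        wall = np.concatenate([np.ones(len(y0))] + resps)
        w = fit_weighted_logreg(Xall, yall, wall)
        for k, (Xk, yk) in enumerate(aux):
            vs[k] = fit_weighted_logreg(Xk, yk, 1 - resps[k])
    return w, resps
```

The flexible nuisance model per task is what keeps irrelevant samples from distorting the shared classifier: samples the shared model explains poorly are absorbed by the nuisance component and down-weighted, rather than forced into the relevant fit.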