Learning to learn. Machine Learning - Special issue on inductive transfer.
Hierarchically Classifying Documents Using Very Few Words. ICML '97 Proceedings of the Fourteenth International Conference on Machine Learning.
Empirical Bayes for Learning to Learn. ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning.
Learning Multiple Tasks with Kernel Methods. The Journal of Machine Learning Research.
Learning Gaussian Processes from Multiple Tasks. ICML '05 Proceedings of the 22nd International Conference on Machine Learning.
A Model of Inductive Bias Learning. Journal of Artificial Intelligence Research.
Automatic Choice of Control Measurements. ACML '09 Proceedings of the 1st Asian Conference on Machine Learning: Advances in Machine Learning.
Multi-Task Feature Learning via Efficient ℓ2,1-Norm Minimization. UAI '09 Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence.
Discriminative Factored Prior Models for Personalized Content-Based Recommendation. CIKM '10 Proceedings of the 19th ACM International Conference on Information and Knowledge Management.
Relevant Subtask Learning by Constrained Mixture Models. Intelligent Data Analysis.
Drosophila Gene Expression Pattern Annotation through Multi-Instance Multi-Label Learning. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB).
Learning Output Kernels for Multi-Task Problems. Neurocomputing.
Geometry Preserving Multi-Task Metric Learning. Machine Learning.
Given multiple prediction problems such as regression or classification, we are interested in a joint inference framework that can effectively share information between tasks to improve prediction accuracy, especially when the number of training examples per task is small. In this paper we propose a probabilistic framework that supports a family of latent variable models for different multi-task learning scenarios. We show that the framework generalizes standard learning methods for single prediction problems and that it can effectively model the structure shared among different prediction tasks. Furthermore, we present efficient algorithms for both the empirical Bayes method and point estimation. Our experiments on simulated datasets and real-world classification datasets demonstrate the effectiveness of the proposed models in two evaluation settings: a standard multi-task learning setting and a transfer learning setting.
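The abstract does not specify the model, but the core idea of sharing information across tasks through a common latent structure can be illustrated with a minimal sketch. The following is one standard point-estimate instance of this idea (not the paper's actual algorithm): each task's regression weights are shrunk toward a shared mean vector, the point-estimate analogue of placing a common Gaussian prior over all task weights. The function name `multitask_ridge` and all parameters are illustrative assumptions.

```python
import numpy as np

def multitask_ridge(tasks, lam=1.0, n_iters=20):
    """Alternately estimate a shared mean weight vector w0 and
    per-task weights shrunk toward w0.

    tasks: list of (X, y) pairs, X of shape (n_t, d), y of shape (n_t,).
    lam:   strength of the shared Gaussian prior (larger = more sharing).
    """
    d = tasks[0][0].shape[1]
    w0 = np.zeros(d)                      # shared structure across tasks
    W = [np.zeros(d) for _ in tasks]      # per-task weight vectors
    for _ in range(n_iters):
        # Per-task step: ridge solution regularized toward the shared mean,
        # i.e. argmin_w ||X w - y||^2 + lam * ||w - w0||^2.
        for t, (X, y) in enumerate(tasks):
            A = X.T @ X + lam * np.eye(d)
            W[t] = np.linalg.solve(A, X.T @ y + lam * w0)
        # Shared step: the prior mean is re-estimated as the task average.
        w0 = np.mean(W, axis=0)
    return w0, W
```

Because every task is pulled toward `w0`, tasks with few examples borrow statistical strength from the others, which is exactly the regime the abstract targets (small training sets per task).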