Flexible latent variable models for multi-task learning

  • Authors:
  • Jian Zhang, Zoubin Ghahramani, and Yiming Yang

  • Affiliations:
  • Jian Zhang: Department of Statistics, Purdue University, West Lafayette, IN 47907, USA
  • Zoubin Ghahramani: Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK, and School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
  • Yiming Yang: School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA

  • Venue:
  • Machine Learning
  • Year:
  • 2008

Abstract

Given multiple prediction problems such as regression or classification, we are interested in a joint inference framework that can effectively share information between tasks to improve prediction accuracy, especially when the number of training examples per task is small. In this paper we propose a probabilistic framework that supports a set of latent variable models for different multi-task learning scenarios. We show that the framework is a generalization of standard learning methods for single prediction problems and that it can effectively model the shared structure among different prediction tasks. Furthermore, we present efficient algorithms for both the empirical Bayes method and point estimation. Our experiments on simulated datasets and real-world classification datasets show the effectiveness of the proposed models in two evaluation settings: a standard multi-task learning setting and a transfer learning setting.
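To make the idea of sharing structure between tasks concrete, the sketch below fits a multi-task linear regression whose per-task weight vectors are constrained to a shared low-dimensional subspace, w_t = Λ s_t, estimated by alternating least squares. This is only a minimal illustration of point estimation in one latent variable model of this flavor, not the authors' actual models or algorithms; the synthetic data, dimensions, and variable names are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup (hypothetical, for illustration only): T related
# regression tasks whose true weight vectors lie in a shared
# K-dimensional subspace, plus a small amount of observation noise.
T, D, K, n = 8, 10, 2, 15  # tasks, features, latent dim, examples per task
Lambda_true = rng.normal(size=(D, K))
tasks = []
for _ in range(T):
    s_true = rng.normal(size=K)
    X = rng.normal(size=(n, D))
    y = X @ (Lambda_true @ s_true) + 0.1 * rng.normal(size=n)
    tasks.append((X, y))

# Point estimation by alternating least squares under the constraint
# w_t = Lam @ S[t]: Lam (D x K) is shared across tasks, S[t] (K,) is
# task-specific. Each subproblem is an ordinary least-squares fit.
Lam = rng.normal(size=(D, K))
S = np.zeros((T, K))
for _ in range(50):
    # Step 1: with Lam fixed, fit each task's latent coordinates s_t
    # in the K-dimensional projected feature space X @ Lam.
    for t, (X, y) in enumerate(tasks):
        S[t] = np.linalg.lstsq(X @ Lam, y, rcond=None)[0]
    # Step 2: with all s_t fixed, solve for Lam jointly. Using the
    # identity X_t Lam s_t = (s_t^T kron X_t) vec(Lam), stack all tasks
    # into one least-squares problem for vec(Lam) (column-major).
    rows = np.vstack([np.kron(S[t][None, :], X)
                      for t, (X, _) in enumerate(tasks)])
    targets = np.concatenate([y for _, y in tasks])
    Lam = np.linalg.lstsq(rows, targets, rcond=None)[0].reshape(D, K, order="F")

# Re-fit the latent coordinates once more so S matches the final Lam.
for t, (X, y) in enumerate(tasks):
    S[t] = np.linalg.lstsq(X @ Lam, y, rcond=None)[0]

# Average training error of the jointly fitted model.
train_mse = np.mean([np.mean((X @ (Lam @ S[t]) - y) ** 2)
                     for t, (X, y) in enumerate(tasks)])
print(f"train MSE: {train_mse:.4f}")
```

Because every task's weight vector must pass through the shared loading matrix, each task effectively borrows statistical strength from the others, which is what makes this kind of model attractive when the per-task sample size n is small relative to the feature dimension D.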