Multitask Learning

  • Authors:
  • Rich Caruana

  • Affiliations:
  • School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. E-mail: caruana@cs.cmu.edu

  • Venue:
  • Machine Learning - Special issue on inductive transfer
  • Year:
  • 1997


Abstract

Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
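The core mechanism the abstract describes, learning several tasks in parallel through a shared representation so that each task's training signal acts as an inductive bias for the others, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the two synthetic regression tasks, the single shared tanh hidden layer, and all hyperparameters below are illustrative assumptions. The key MTL detail is that the shared layer's gradient sums the error signals from both task heads.

```python
import numpy as np

# Minimal sketch of multitask learning in a backprop net (illustrative,
# not the paper's setup): one hidden layer shared by two regression
# tasks, each task with its own linear output head.

rng = np.random.default_rng(0)

# Toy data: both tasks depend on overlapping underlying features, so
# the shared hidden layer can transfer information between them.
X = rng.normal(size=(200, 5))
y1 = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0])   # task 1 target
y2 = X @ np.array([1.0, -2.0, 0.0, 0.3, 0.0])   # related task 2 target

H = 8                                    # hidden units (shared)
W = rng.normal(scale=0.1, size=(5, H))   # shared representation weights
w1 = rng.normal(scale=0.1, size=H)       # task-1 head
w2 = rng.normal(scale=0.1, size=H)       # task-2 head
lr = 0.01

def forward(X):
    Z = np.tanh(X @ W)                   # shared hidden representation
    return Z, Z @ w1, Z @ w2             # one prediction per task

Z, p1, p2 = forward(X)
mse1_init = float(np.mean((p1 - y1) ** 2))
mse2_init = float(np.mean((p2 - y2) ** 2))

for _ in range(2000):
    Z, p1, p2 = forward(X)
    e1, e2 = p1 - y1, p2 - y2
    # Each head is updated only by its own task's error...
    g1 = Z.T @ e1 / len(X)
    g2 = Z.T @ e2 / len(X)
    # ...but the shared layer receives the SUM of both tasks' error
    # signals: this coupling is what lets one task bias the other.
    dZ = (np.outer(e1, w1) + np.outer(e2, w2)) * (1 - Z ** 2)
    gW = X.T @ dZ / len(X)
    w1 -= lr * g1
    w2 -= lr * g2
    W -= lr * gW

Z, p1, p2 = forward(X)
mse1 = float(np.mean((p1 - y1) ** 2))
mse2 = float(np.mean((p2 - y2) ** 2))
```

After training, both tasks' errors fall well below their initial values, even though the hidden layer is shared; making that layer task-specific would remove the transfer path the abstract describes.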