A theory of transfer learning with applications to active learning

  • Authors:
  • Liu Yang; Steve Hanneke; Jaime Carbonell

  • Affiliations:
  • Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Statistics, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA

  • Venue:
  • Machine Learning
  • Year:
  • 2013


Abstract

We explore a transfer learning setting in which a finite sequence of target concepts is sampled independently according to an unknown distribution over a known family. We study the total number of labeled examples required to learn all targets to an arbitrary specified expected accuracy, focusing on the asymptotics in the number of tasks and the desired accuracy. Our primary interest is in formally understanding the fundamental benefits of transfer learning, compared to learning each target independently of the others. Our approach to the transfer problem is general, in the sense that it can be used with a variety of learning protocols. As a particularly interesting application, we study in detail the benefits of transfer for self-verifying active learning; in this setting, we find that the number of labeled examples required for learning with transfer is often significantly smaller than that required for learning each target independently.
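To make the setting concrete, the following is a minimal sketch, not the paper's algorithm: it assumes a known family of threshold classifiers on [0, 1] and an unknown task distribution over thresholds, draws a sequence of target concepts i.i.d. from that distribution, and learns each task independently by an active (binary-search) labeling strategy, counting label queries per task. All function names and parameter values here are hypothetical illustrations; the paper's contribution concerns how sharing information across such tasks can reduce this label count.

```python
"""
Illustrative sketch of the transfer learning setting (not the paper's method):
a known concept family (thresholds on [0, 1]) with an unknown distribution
over target concepts, each task learned actively and independently.
"""
import random


def sample_target(prior_low=0.2, prior_high=0.8):
    # Unknown-to-the-learner task distribution over the known family
    # of threshold classifiers h_t(x) = 1[x >= t]. (Hypothetical choice.)
    return random.uniform(prior_low, prior_high)


def active_learn_threshold(target_t, epsilon=0.01):
    # Learn a single task by binary search over label queries; the label
    # cost of the task is the number of oracle queries issued.
    lo, hi, queries = 0.0, 1.0, 0
    while hi - lo > epsilon:
        x = (lo + hi) / 2.0
        label = 1 if x >= target_t else 0  # oracle answer for this task
        queries += 1
        if label == 1:
            hi = x  # threshold lies at or below x
        else:
            lo = x  # threshold lies above x
    return (lo + hi) / 2.0, queries


if __name__ == "__main__":
    random.seed(0)
    num_tasks, total_labels = 20, 0
    for _ in range(num_tasks):
        t = sample_target()
        _, q = active_learn_threshold(t, epsilon=0.01)
        total_labels += q
    # Baseline label count when each task is learned independently;
    # transfer methods aim to beat this by exploiting the shared task distribution.
    print(f"labels used across {num_tasks} tasks (no transfer): {total_labels}")
```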