On the hardness of domain adaptation and the utility of unlabeled target samples

  • Authors:
  • Shai Ben-David; Ruth Urner

  • Affiliations:
  • School of Computer Science, University of Waterloo, Waterloo, ON, Canada (both authors)

  • Venue:
  • ALT'12: Proceedings of the 23rd International Conference on Algorithmic Learning Theory
  • Year:
  • 2012

Abstract

The Domain Adaptation (DA) problem in machine learning occurs when the distributions generating the training and test data differ. We consider the covariate shift setting, where the labeling function is the same in both domains. Many works have proposed algorithms for Domain Adaptation in this setting, but very few generalization guarantees exist for these algorithms. We show that, without strong prior knowledge about the training task, such guarantees are actually unachievable (unless the training samples are prohibitively large). The contributions of this paper are twofold: On the one hand, we show that Domain Adaptation in this setup is hard. Even under very strong assumptions about the relationship between source and target distribution and, on top of that, a realizability assumption for the target task with respect to a small class, the required total sample sizes grow unboundedly with the domain size. On the other hand, we present settings where we achieve almost matching upper bounds on the sum of the sizes of the two samples. Moreover, the (necessarily large) samples can consist mostly of unlabeled (target) samples, which are often much cheaper to obtain than labeled ones. The size of the labeled (source) sample shrinks back to the standard dependence on the VC-dimension of the concept class. This implies that unlabeled target-generated data is provably beneficial for DA learning.
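The setting can be made concrete with a small simulation. Below is a minimal sketch, in the spirit of (but not identical to) the paper's upper-bound results: source and target share a labeling function but have different marginals (covariate shift); a large unlabeled target sample is pseudo-labeled by nearest labeled source neighbors, and ERM over a small hypothesis class (thresholds on [0, 1]) is then run on the pseudo-labeled target data. All distributions, sample sizes, and the threshold class here are illustrative assumptions, not the paper's actual constructions.

```python
# Illustrative covariate-shift sketch (not the paper's construction):
# few labeled source points, many cheap unlabeled target points.
import numpy as np

rng = np.random.default_rng(0)

def label(x):
    # Shared labeling function: identical on source and target (covariate shift).
    return (x > 0.5).astype(int)

# Source and target share the labeling function but differ in their marginals.
source_x = rng.uniform(0.0, 0.7, size=200)    # small labeled source sample
source_y = label(source_x)
target_x = rng.uniform(0.3, 1.0, size=2000)   # large unlabeled target sample

# Pseudo-label each target point with the label of its nearest source neighbor.
nearest = np.abs(target_x[:, None] - source_x[None, :]).argmin(axis=1)
pseudo_y = source_y[nearest]

# ERM over a small class (thresholds on [0, 1]) on the pseudo-labeled target sample.
thresholds = np.linspace(0.0, 1.0, 101)
errors = [((target_x > t).astype(int) != pseudo_y).mean() for t in thresholds]
best_t = thresholds[int(np.argmin(errors))]

# Evaluate the learned threshold against the true labels on fresh target data.
test_x = rng.uniform(0.3, 1.0, size=5000)
test_err = ((test_x > best_t).astype(int) != label(test_x)).mean()
print(f"learned threshold ~ {best_t:.2f}, target error ~ {test_err:.4f}")
```

The sketch mirrors the abstract's quantitative message: the expensive resource (labeled source data) stays small, at a size governed by the capacity of the small hypothesis class, while the bulk of the data is cheap unlabeled target points.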