The Domain Adaptation problem in machine learning arises when the distributions generating the test and training data differ. We consider the covariate shift setting, in which the labeling function is the same in both domains. Many works have proposed Domain Adaptation algorithms for this setting, but very few of these algorithms come with generalization guarantees. We show that, without strong prior knowledge about the training task, such guarantees are actually unachievable unless the training samples are prohibitively large. The contributions of this paper are two-fold. On the one hand, we show that Domain Adaptation in this setup is hard: even under very strong assumptions about the relationship between the source and target distributions and, on top of that, a realizability assumption for the target task with respect to a small class, the required total sample sizes grow unboundedly with the domain size. On the other hand, we present settings in which we achieve almost matching upper bounds on the sum of the sizes of the two samples. Moreover, the (necessarily large) samples can consist mostly of unlabeled target-generated examples, which are often much cheaper to obtain than labeled ones; the size of the labeled (source) sample shrinks back to the standard dependence on the VC-dimension of the concept class. This implies that unlabeled target-generated data is provably beneficial for DA learning.
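The covariate shift setting described above can be made concrete with a small sketch (my own illustration, not the paper's construction): the labeling function is identical in both domains, but the source and target draw their inputs from different distributions, so a predictor fit on labeled source data can fail on target regions the source rarely covers. The Beta-distribution choices and the 1-nearest-neighbor learner here are arbitrary assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# The labeling function is shared by both domains -- the covariate shift assumption.
def label(x):
    return (np.sin(8 * x) > 0).astype(int)

# Source and target differ only in their input (covariate) distributions:
# source mass concentrates near 0, target mass concentrates near 1.
x_src = rng.beta(2, 5, size=30)
x_tgt = rng.beta(5, 2, size=2000)

# A simple 1-nearest-neighbor predictor trained on the labeled source sample.
def predict(x_train, y_train, x):
    idx = np.abs(x[:, None] - x_train[None, :]).argmin(axis=1)
    return y_train[idx]

y_src = label(x_src)
err_src = np.mean(predict(x_src, y_src, x_src) != y_src)          # training error: 0
err_tgt = np.mean(predict(x_src, y_src, x_tgt) != label(x_tgt))   # target error
```

Because the small source sample barely covers the region near 1 where the target concentrates, the target error typically exceeds the source error even though the labeling rule never changed, which is exactly the gap that Domain Adaptation algorithms must contend with.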