We present a new analysis of the problem of learning with drifting distributions in the batch setting using the notion of discrepancy. We prove learning bounds based on the Rademacher complexity of the hypothesis set and the discrepancy of distributions, both for a drifting PAC scenario and for a tracking scenario. Our bounds are always tighter than, and in some cases substantially improve upon, previous bounds based on the L1 distance. We also present a generalization of the standard on-line-to-batch conversion to the drifting scenario, stated in terms of the discrepancy and arbitrary convex combinations of hypotheses. We introduce a new algorithm exploiting these learning guarantees, which we show can be formulated as a simple quadratic program (QP). Finally, we report the results of preliminary experiments demonstrating the benefits of this algorithm.
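To make the central quantity concrete: the discrepancy between two distributions P and Q over a hypothesis set H measures the largest change, when moving from P to Q, in the expected loss between any pair of hypotheses. The sketch below is an illustrative estimate of this quantity from two unlabeled samples, for the 0-1 loss and a small finite hypothesis set of 1-D threshold classifiers; the sample values, thresholds, and the helper names (`empirical_discrepancy`, `disagreement`) are hypothetical choices for the example, not part of the paper.

```python
import itertools

def empirical_discrepancy(sample_p, sample_q, hypotheses):
    """Empirical discrepancy for the 0-1 loss: the maximum over pairs
    (h, h') of the difference in average disagreement rates between
    the two samples."""
    def disagreement(sample, h1, h2):
        # Fraction of points on which the two hypotheses differ.
        return sum(h1(x) != h2(x) for x in sample) / len(sample)
    return max(
        abs(disagreement(sample_p, h1, h2) - disagreement(sample_q, h1, h2))
        for h1, h2 in itertools.product(hypotheses, repeat=2)
    )

# Hypothetical example: threshold classifiers h_t(x) = 1[x >= t].
thresholds = [0.0, 0.5, 1.0]
H = [lambda x, t=t: x >= t for t in thresholds]
S_p = [0.2, 0.4, 0.6, 0.8]   # sample from the earlier distribution
S_q = [0.6, 0.7, 0.8, 0.9]   # sample after the distribution has drifted
print(empirical_discrepancy(S_p, S_q, H))  # → 0.5
```

A small discrepancy certifies that no pair of hypotheses in H can distinguish the two distributions by much, which is exactly what lets past samples remain useful under drift; for richer hypothesis sets (e.g. hyperplanes with a convex loss) the supremum is computed by optimization rather than enumeration.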