Concept drift means that the concept about which data is obtained may shift from time to time, each shift occurring only after some minimum permanence. Apart from this minimum permanence, the shifts need not satisfy any further requirements and may occur infinitely often. This work studies to what extent it is still possible to predict or learn values of a data sequence produced by drifting concepts. Various ways to measure the quality of such predictions, including martingale betting strategies and the density and frequency of correctness, are introduced and compared with one another. For each of these measures of prediction quality, (nearly) optimal bounds on the permanence needed for learnability are established for several interesting concrete classes. The concrete classes from which the drifting concepts are drawn include regular languages accepted by finite automata of bounded size, polynomials of bounded degree, and exponentially growing sequences defined by recurrence relations of bounded size. Some important restricted cases of drift are also studied, e.g., the case where the intervals of permanence are computable. In the case where the concepts shift only among finitely many possibilities from certain infinite, arguably practical classes, the learning algorithms can be considerably improved.
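As a minimal illustration (not taken from the paper) of why sufficient permanence yields a high frequency of correct predictions, consider drifting concepts drawn from polynomials of bounded degree. A degree-d polynomial sequence satisfies a vanishing (d+1)-st finite difference, so each next value can be extrapolated from the previous d+1 values; after a concept shift the predictor errs on at most d+1 points before recovering. The simulation below (all names and parameter choices are illustrative assumptions) measures the resulting frequency of correctness:

```python
from math import comb
import random

def predict_next(window, d):
    # For a degree-d polynomial the (d+1)-st finite difference is 0, so
    # x[n] = sum_{k=1}^{d+1} (-1)^(k+1) * C(d+1, k) * x[n-k].
    m = d + 1
    return sum((-1) ** (k + 1) * comb(m, k) * window[-k] for k in range(1, m + 1))

def frequency_of_correctness(degree=2, permanence=50, segments=20, seed=0):
    # Build a data sequence from concepts (random polynomials of bounded
    # degree) that shift every `permanence` steps.
    rng = random.Random(seed)
    seq = []
    for _ in range(segments):
        coeffs = [rng.randint(-3, 3) for _ in range(degree + 1)]
        start = len(seq)
        for t in range(permanence):
            x = start + t
            seq.append(sum(c * x ** i for i, c in enumerate(coeffs)))
    # Predict each value from the preceding degree+1 values and count hits.
    correct, trials = 0, 0
    for n in range(degree + 1, len(seq)):
        guess = predict_next(seq[n - (degree + 1):n], degree)
        trials += 1
        correct += (guess == seq[n])
    return correct / trials
```

With permanence p and degree d, at most d+1 of each segment's p predictions use a window straddling a shift, so the frequency of correctness is at least roughly 1 - (d+1)/p; shrinking p toward d+1 destroys this guarantee, matching the paper's theme of permanence bounds for learnability.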