Concept drift means that the concept about which data is obtained may shift from time to time, each time after some minimum permanence. Beyond this minimum permanence, the shifts need not satisfy any further requirements and may occur infinitely often. This work studies to what extent it is still possible to predict or learn values of a data sequence produced by drifting concepts. Various ways to measure the quality of such predictions, including martingale betting strategies and the density and frequency of correctness, are introduced and compared with one another. For each of these quality measures, (nearly) optimal bounds on the permanence required for learnability are established for several interesting concrete classes. The concrete classes from which the drifting concepts are drawn include regular languages accepted by finite automata of bounded size, polynomials of bounded degree, and sequences defined by recurrence relations of bounded size. Some important restricted cases of drift are also studied, for example the case where the intervals of permanence are computable. When the concepts shift only among finitely many possibilities from certain infinite, arguably practical classes, the learning algorithms can be considerably improved. Copyright 2001 Elsevier Science B.V. All rights reserved.
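The setting described above can be illustrated with a toy simulation. The following sketch is not the paper's algorithm; it is a hypothetical example assuming a small finite class of candidate sequences, a drifting target that stays with each concept for at least a `permanence` number of steps, and a learner that predicts from any candidate consistent with the data observed since its last mistake. The names `CANDIDATES`, `drifting_sequence`, and `predict` are invented for this illustration.

```python
import random

# Hypothetical finite class: sequences n -> a*n mod 7 for a = 1..4.
# These are stand-ins for the bounded-size concepts in the abstract.
CANDIDATES = [lambda n, a=a: a * n % 7 for a in range(1, 5)]

def drifting_sequence(length, permanence, rng):
    """Generate data from concepts that shift, each phase lasting
    at least `permanence` steps (the minimum permanence above)."""
    values, t = [], 0
    while t < length:
        f = rng.choice(CANDIDATES)           # concept may shift here
        span = permanence + rng.randrange(permanence)
        for _ in range(span):
            if t >= length:
                break
            values.append(f(t))
            t += 1
    return values

def predict(seq):
    """Predict each value from earlier ones; return the frequency of
    correctness, one of the quality measures mentioned in the abstract."""
    correct, start = 0, 0                    # start of consistency window
    for t, actual in enumerate(seq):
        # candidates consistent with everything seen since the last mistake
        live = [f for f in CANDIDATES
                if all(f(s) == seq[s] for s in range(start, t))]
        guess = live[0](t) if live else 0
        if guess == actual:
            correct += 1
        else:
            start = t  # mistake: restart the window at the mistaken step
    return correct / len(seq)

rng = random.Random(0)
seq = drifting_sequence(400, permanence=20, rng=rng)
print(predict(seq))  # typically high when permanence is large vs. the class
```

With a large permanence relative to the class size, the learner pays only a few mistakes per phase and the frequency of correctness approaches 1; shrinking the permanence degrades it, which is the trade-off the permanence bounds in the paper quantify.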