In this paper we consider an approach to passive learning. In contrast to the classical PAC model, we do not assume that the examples are drawn independently from an underlying distribution; instead, they are generated by a time-driven process. We define deterministic and probabilistic learning models of this kind and investigate their relationships to each other and to other models. The fact that successive examples are related can often be exploited to gain additional information, similar to the information gained from membership queries. We show how this can be used to design on-line prediction algorithms. In particular, we present efficient algorithms for exactly identifying Boolean threshold functions and 2-term RSE (ring-sum expansions), and for learning 2-term DNF, when the examples are generated by a random walk on {0,1}^n.
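To make the example-generation model concrete, the following is a minimal sketch of one natural instance of such a time-driven process: a random walk on the hypercube {0,1}^n that flips one uniformly chosen coordinate per step, with each visited point labeled by a Boolean threshold function. The function and variable names (`random_walk_examples`, `threshold_fn`) are illustrative, not from the paper, and the paper's models may use other walk variants; the key property shown is that consecutive examples differ in exactly one bit, which is the correlation the learner can exploit.

```python
import random

def random_walk_examples(f, n, steps, seed=None):
    """Yield labeled examples (x, f(x)) from a random walk on {0,1}^n.

    Each step flips one uniformly chosen bit of the current point, so
    successive examples differ in exactly one coordinate. This is the
    extra structure, absent from i.i.d. PAC samples, that a learner
    can use much like a membership query on a neighboring point.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        yield tuple(x), f(x)
        i = rng.randrange(n)  # pick a coordinate uniformly at random
        x[i] ^= 1             # flip it to obtain the next point

def threshold_fn(w, t):
    """Boolean threshold function: 1 iff the weighted sum reaches t."""
    return lambda x: int(sum(wi * xi for wi, xi in zip(w, x)) >= t)
```

For instance, `list(random_walk_examples(threshold_fn([1, 2, 3, 4], 4), 4, 10, seed=0))` produces ten labeled points in which each point is a one-bit neighbor of its predecessor; observing how the label changes (or not) across such single-bit flips is what reveals information about individual coordinates of the target.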