In this paper we consider an approach to passive learning. In contrast to the classical PAC model, we do not assume that the examples are drawn independently from an underlying distribution, but rather that they are generated by a time-driven process. We define deterministic and probabilistic learning models of this kind and investigate how they relate to each other and to other models. The fact that successive examples are related can often be exploited to gain additional information, similar to the information gained from membership queries. We show that this can be used to design on-line prediction algorithms. In particular, we present efficient algorithms for exactly identifying Boolean threshold functions, 2-term ring-sum expansions (RSE), and 2-term DNF formulas when the examples are generated by a random walk on {0,1}^n.
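The idea that a random walk on the hypercube leaks membership-query-like information can be illustrated with a minimal sketch (this is not the paper's algorithm, just an illustration of the setting): successive examples differ in exactly one bit, so whenever the label changes across a step, the flipped bit is revealed to be relevant to the target function. All names below (`threshold_fn`, `random_walk_examples`) are hypothetical.

```python
import random

def threshold_fn(weights, theta):
    # Boolean threshold function: f(x) = 1 iff sum_i w_i * x_i >= theta.
    return lambda x: int(sum(w * b for w, b in zip(weights, x)) >= theta)

def random_walk_examples(n, f, steps, seed=0):
    # Random walk on {0,1}^n: start at a uniform point, then flip one
    # uniformly chosen bit per step; yield each visited point with its label.
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        yield tuple(x), f(x)
        x[rng.randrange(n)] ^= 1  # flip one coordinate

# Successive examples are Hamming neighbors, so a label change across a
# step pins down a relevant bit -- the kind of information a membership
# query on a neighboring point would provide in the classical setting.
n = 5
f = threshold_fn([1, 0, 2, 0, 1], theta=2)
examples = list(random_walk_examples(n, f, steps=200))
relevant = set()
for (x1, y1), (x2, y2) in zip(examples, examples[1:]):
    if y1 != y2:
        i = next(j for j in range(n) if x1[j] != x2[j])
        relevant.add(i)
```

Here `relevant` can only ever contain coordinates with nonzero weight, since flipping a zero-weight bit never changes the label.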