Online Learning versus Offline Learning
Machine Learning
Two of the most commonly used models in computational learning theory are the distribution-free model, in which examples are chosen from a fixed but arbitrary distribution, and the absolute mistake-bound model, in which examples are presented in an arbitrary order. Over the Boolean domain $\{0,1\}^n$, it is known that if the learner is allowed unlimited computational resources, then any concept class learnable in one model is also learnable in the other. In addition, any polynomial-time learning algorithm for a concept class in the mistake-bound model can be transformed into one that learns the class in the distribution-free model. This paper shows that if one-way functions exist, then the mistake-bound model is strictly harder than the distribution-free model for polynomial-time learning. Specifically, given a one-way function, it is shown how to create a concept class over $\{0,1\}^n$ that is learnable in polynomial time in the distribution-free model, but not in the absolute mistake-bound model. Moreover, the concept class remains hard to learn in the mistake-bound model even if the learner is allowed a polynomial number of membership queries. The concepts considered are based upon the Goldreich, Goldwasser, and Micali random function construction [Goldreich, Goldwasser, and Micali, J. ACM, 33 (1986), pp. 792--807] and involve creating the following new cryptographic object: an exponentially long sequence of strings $\sigma_1, \sigma_2, \ldots, \sigma_r$ over $\{0,1\}^n$ that is hard to compute in one direction (given $\sigma_i$, one cannot compute $\sigma_j$ for $j < i$) but easy to compute in the other direction (given $\sigma_i$, one can compute $\sigma_j$ for any $j > i$, even if $j$ is exponentially larger than $i$). Similar sequences considered previously [Blum, Blum, and Shub, SIAM J. Comput., 15 (1986), pp. 364--383], [Blum and Micali, SIAM J. Comput., 13 (1984), pp. 850--863] did not allow random-access jumps forward without knowledge of a seed allowing one to compute backwards as well.
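To make the contrast concrete, the sequential one-way sequences the abstract cites (in the style of Blum–Micali or Blum–Blum–Shub) can be sketched as an iterated hash chain: stepping forward one position is one application of the function, while stepping backward requires inverting it. The sketch below uses SHA-256 purely as a stand-in one-way function (an assumption for illustration, not the paper's construction); note that jumping from $\sigma_i$ to $\sigma_j$ here costs $j - i$ evaluations, whereas the paper's new object additionally supports random-access forward jumps without that linear cost.

```python
import hashlib


def hash_chain(seed: bytes, length: int) -> list[bytes]:
    """Build sigma_1, ..., sigma_r via sigma_{i+1} = H(sigma_i).

    Forward steps are easy (one hash each); going backward would
    require inverting H, which is assumed to be hard.
    """
    sigma = hashlib.sha256(seed).digest()
    chain = [sigma]
    for _ in range(length - 1):
        sigma = hashlib.sha256(sigma).digest()
        chain.append(sigma)
    return chain


chain = hash_chain(b"example-seed", 8)
# Anyone holding sigma_3 can recompute sigma_4 with a single hash:
assert hashlib.sha256(chain[2]).digest() == chain[3]
```

In this sequential design, reaching an exponentially distant $\sigma_j$ from $\sigma_i$ is infeasible without the seed; the paper's GGM-based sequence removes exactly that limitation while preserving backward hardness.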