We consider a natural framework of learning from correlated data, in which successive examples used for learning are generated according to a random walk over the space of possible examples. Previous research has suggested that the Random Walk model is more powerful than comparable standard models of learning from independent examples, by exhibiting learning algorithms in the Random Walk framework that have no known counterparts in the standard model. We give strong evidence that the Random Walk model is indeed more powerful than the standard model, by showing that if any cryptographic one-way function exists (a universally held belief in public key cryptography), then there is a class of functions that can be learned efficiently in the Random Walk setting but not in the standard setting where all examples are independent.
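As a rough illustration (not taken from the paper), the two example oracles being contrasted can be sketched as follows. This is a minimal sketch assuming the example space is the Boolean hypercube {0,1}^n and the walk is the standard "lazy" random walk that resamples one uniformly chosen coordinate per step; the function names `random_walk_examples` and `iid_examples` are hypothetical, chosen only for this illustration.

```python
import random

def random_walk_examples(f, n, steps):
    """Random Walk oracle: successive labeled examples (x, f(x)) are
    generated along a lazy random walk on {0,1}^n, so each example
    differs from its predecessor in at most one coordinate."""
    x = [random.randint(0, 1) for _ in range(n)]  # uniform start point
    for _ in range(steps):
        yield (tuple(x), f(x))
        i = random.randrange(n)       # pick a coordinate uniformly
        x[i] = random.randint(0, 1)   # resample that bit ("lazy" step)

def iid_examples(f, n, steps):
    """Standard-model oracle: every example is drawn fresh and
    independently from the uniform distribution on {0,1}^n."""
    for _ in range(steps):
        x = [random.randint(0, 1) for _ in range(n)]
        yield (tuple(x), f(x))
```

The point of the contrast is that in the Random Walk setting a learner sees the value of f at many neighboring points of the hypercube, information that independent uniform samples almost never provide.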