We consider the problem of learning sparse parities in the presence of noise. For learning parities on r out of n variables, we give an algorithm that runs in time poly(log(1/δ), 1/(1−2η)) · n^((1+(2η)^2+o(1))r/2) and uses only r·log(n/δ)·ω(1)/(1−2η)^2 samples in the random-noise setting under the uniform distribution, where η is the noise rate and δ is the confidence parameter. By previously known results, this algorithm also works for adversarial noise and generalizes to arbitrary distributions. Even though efficient algorithms for learning sparse parities in the presence of noise would have major implications for learning other hypothesis classes, our work is the first to give a bound better than the brute-force O(n^r). As a consequence, we obtain the first nontrivial bound for learning r-juntas in the presence of noise, and also a small improvement in the complexity of learning DNF under the uniform distribution.
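To make the baseline concrete, here is a hedged sketch of the brute-force O(n^r) approach the abstract compares against: enumerate all size-r subsets of variables and keep the parity that disagrees with the fewest noisy labels. All function names and parameter choices below are illustrative; this is the trivial baseline, not the paper's improved algorithm.

```python
# Illustrative brute-force baseline for learning an r-sparse parity under
# random classification noise (rate eta). Names and parameters are hypothetical.
import itertools
import random

def noisy_sample(secret, n, eta, rng):
    """Draw x uniformly from {0,1}^n; label is parity over `secret`, flipped w.p. eta."""
    x = [rng.randrange(2) for _ in range(n)]
    y = sum(x[i] for i in secret) % 2
    if rng.random() < eta:
        y ^= 1
    return x, y

def brute_force_parity(samples, n, r):
    """Try all C(n, r) candidate supports; return the one with fewest disagreements."""
    best, best_err = None, float("inf")
    for S in itertools.combinations(range(n), r):
        err = sum((sum(x[i] for i in S) % 2) != y for x, y in samples)
        if err < best_err:
            best, best_err = S, err
    return best

rng = random.Random(0)
n, r, eta = 10, 2, 0.1
secret = (2, 7)
samples = [noisy_sample(secret, n, eta, rng) for _ in range(200)]
print(brute_force_parity(samples, n, r))
```

The correct support disagrees with roughly an η fraction of labels, while any other size-r parity disagrees with about half of them, so with enough samples the minimizer is the hidden parity; the catch is the C(n, r) ≈ n^r enumeration, which is exactly the cost the paper's algorithm improves on.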