Proceedings of the thirty-ninth annual ACM symposium on Theory of computing
Unconditional lower bounds for learning intersections of halfspaces
Machine Learning
Approximating maximum satisfiable subsystems of linear equations of bounded width
Information Processing Letters
On hardness of learning intersection of two halfspaces
STOC '08 Proceedings of the fortieth annual ACM symposium on Theory of computing
Agnostically learning decision trees
STOC '08 Proceedings of the fortieth annual ACM symposium on Theory of computing
On agnostic boosting and parity learning
STOC '08 Proceedings of the fortieth annual ACM symposium on Theory of computing
Separating Models of Learning with Faulty Teachers
ALT '07 Proceedings of the 18th international conference on Algorithmic Learning Theory
A tamper-proof and lightweight authentication scheme
Pervasive and Mobile Computing
Cryptographic hardness for learning intersections of halfspaces
Journal of Computer and System Sciences
Separating models of learning with faulty teachers
Theoretical Computer Science
Testing Fourier Dimensionality and Sparsity
ICALP '09 Proceedings of the 36th International Colloquium on Automata, Languages and Programming: Part I
Learning Halfspaces with Malicious Noise
ICALP '09 Proceedings of the 36th International Colloquium on Automata, Languages and Programming: Part I
Hardness of Solving Sparse Overdetermined Linear Systems: A 3-Query PCP over Integers
ACM Transactions on Computation Theory (TOCT)
Extracting Computational Entropy and Learning Noisy Linear Functions
COCOON '09 Proceedings of the 15th Annual International Conference on Computing and Combinatorics
Learning with annotation noise
ACL '09 Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 - Volume 1
Exploiting Product Distributions to Identify Relevant Variables of Correlation Immune Functions
The Journal of Machine Learning Research
Learning Halfspaces with Malicious Noise
The Journal of Machine Learning Research
Local list-decoding and testing of random linear codes from high error
Proceedings of the forty-second ACM symposium on Theory of computing
Differentially private data release through multidimensional partitioning
SDM'10 Proceedings of the 7th VLDB conference on Secure data management
SIAM Journal on Computing
Testing by implicit learning: a brief survey
Property testing
Some recent results on local testing of sparse linear codes
Property testing
New algorithms for learning in presence of errors
ICALP'11 Proceedings of the 38th International Colloquium on Automata, Languages and Programming - Volume Part I
SIAM Journal on Computing
Testing Fourier Dimensionality and Sparsity
SIAM Journal on Computing
Cryptography from learning parity with noise
SOFSEM'12 Proceedings of the 38th international conference on Current Trends in Theory and Practice of Computer Science
Learning Kernel-Based Halfspaces with the 0-1 Loss
SIAM Journal on Computing
On the list decodability of random linear codes with large error rates
Proceedings of the forty-fifth annual ACM symposium on Theory of computing
We address well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise. Learning parities under the uniform distribution with random classification noise, also called the noisy parity problem, is a famous open problem in computational learning. We reduce a number of basic problems regarding learning under the uniform distribution to learning of noisy parities. We show that under the uniform distribution, learning parities with adversarial classification noise reduces to learning parities with random classification noise. Together with the parity learning algorithm of Blum et al. [5], this gives the first nontrivial algorithm for learning parities with adversarial noise. We show that learning of DNF expressions reduces to learning noisy parities of just a logarithmic number of variables. We show that learning of k-juntas reduces to learning noisy parities of k variables. These reductions work even in the presence of random classification noise in the original DNF or junta.

We then consider the problem of learning halfspaces over \mathbb{Q}^n with adversarial noise, or, equivalently, finding a halfspace that maximizes the agreement rate with a given set of examples. We prove an essentially optimal hardness factor of 2 - \epsilon, improving on the factor of \frac{85}{84} - \epsilon due to Bshouty and Burroughs [8]. Finally, we show that majorities of halfspaces are hard to PAC-learn using any representation, based on the cryptographic assumption underlying the Ajtai-Dwork cryptosystem.
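For readers unfamiliar with the noisy parity setup, here is a minimal Python sketch (not from the paper; all names and parameters are illustrative) of the problem the abstract refers to: uniform examples labeled by the parity of a hidden subset of bits, with each label flipped independently with some noise rate. The learner shown is the trivial exhaustive search over parities on at most k variables, which is the brute-force baseline that the k-junta and DNF reductions above target.

```python
import itertools
import random

def generate_noisy_parity_samples(n, secret, noise_rate, m, rng):
    """Draw m uniform examples x in {0,1}^n, labeled by the parity of the
    bits indexed by `secret`; each label is flipped independently with
    probability noise_rate (random classification noise)."""
    samples = []
    for _ in range(m):
        x = tuple(rng.randint(0, 1) for _ in range(n))
        label = sum(x[i] for i in secret) % 2
        if rng.random() < noise_rate:
            label ^= 1  # random classification noise
        samples.append((x, label))
    return samples

def brute_force_learn_parity(samples, n, k):
    """Exhaustively search all parities on at most k of the n variables and
    return the index set agreeing with the most samples. Runs in time
    roughly n^k -- the bottleneck a faster noisy-parity learner would beat."""
    best_set, best_agree = (), -1
    for size in range(k + 1):
        for cand in itertools.combinations(range(n), size):
            agree = sum((sum(x[i] for i in cand) % 2) == y
                        for x, y in samples)
            if agree > best_agree:
                best_set, best_agree = cand, agree
    return best_set

rng = random.Random(0)
secret = (1, 4, 7)
samples = generate_noisy_parity_samples(10, secret, 0.1, 500, rng)
# With 500 samples at noise rate 0.1, the true parity agrees with ~90% of
# labels while every other parity agrees with ~50%, so the search recovers
# `secret` with high probability.
recovered = brute_force_learn_parity(samples, 10, 3)
print(recovered)
```

Any parity distinct from the hidden one agrees with the noisy labels on only about half the examples, which is why maximizing agreement identifies the secret; the open question the abstract refers to is whether this can be done in time polynomial in n rather than n^k.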