New Results for Learning Noisy Parities and Halfspaces

  • Authors:
  • Vitaly Feldman; Parikshit Gopalan; Subhash Khot; Ashok Kumar Ponnuswami

  • Affiliations:
  • Harvard University, USA; Georgia Tech., USA; Georgia Tech., USA; Georgia Tech., USA

  • Venue:
  • FOCS '06 Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science
  • Year:
  • 2006

Abstract

We address well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise. Learning parities under the uniform distribution with random classification noise, also called the noisy parity problem, is a famous open problem in computational learning. We reduce a number of basic problems regarding learning under the uniform distribution to learning of noisy parities. We show that under the uniform distribution, learning parities with adversarial classification noise reduces to learning parities with random classification noise. Together with the parity learning algorithm of Blum et al. [5], this gives the first nontrivial algorithm for learning parities with adversarial noise. We show that learning of DNF expressions reduces to learning noisy parities of just a logarithmic number of variables. We show that learning of k-juntas reduces to learning noisy parities of k variables. These reductions work even in the presence of random classification noise in the original DNF or junta.

We then consider the problem of learning halfspaces over $\mathbb{Q}^n$ with adversarial noise, or, equivalently, finding a halfspace that maximizes the agreement rate with a given set of examples. We prove an essentially optimal hardness factor of $2 - \epsilon$, improving on the factor of $\frac{85}{84} - \epsilon$ due to Bshouty and Burroughs [8]. Finally, we show that majorities of halfspaces are hard to PAC-learn using any representation, based on the cryptographic assumption underlying the Ajtai-Dwork cryptosystem.
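
To make the central object concrete, here is a minimal Python sketch (not from the paper) of the noisy parity problem under the uniform distribution: examples are uniform $x \in \{0,1\}^n$ labeled by the parity of an unknown subset $S$ of coordinates, with each label flipped independently with probability $\eta$. The function names and parameters are our own, and the brute-force learner is purely illustrative; it is not the algorithm of Blum et al. [5], which runs in the much better time $2^{O(n/\log n)}$.

```python
import random
from itertools import combinations

def noisy_parity_samples(n, S, eta, m, rng=random):
    """Draw m examples (x, label): x is uniform over {0,1}^n, and the
    label is the parity of the bits of x indexed by S, flipped
    independently with probability eta (random classification noise)."""
    samples = []
    for _ in range(m):
        x = tuple(rng.randint(0, 1) for _ in range(n))
        label = sum(x[i] for i in S) % 2
        if rng.random() < eta:
            label ^= 1  # noisy label
        samples.append((x, label))
    return samples

def brute_force_parity_learner(samples, n):
    """Return the subset S whose parity agrees with the most labels.
    Exhaustive search over all 2^n candidate parities, so exponential
    in n -- for illustration only, not the paper's approach."""
    best, best_agree = set(), -1
    for r in range(n + 1):
        for S in combinations(range(n), r):
            agree = sum(1 for x, y in samples
                        if sum(x[i] for i in S) % 2 == y)
            if agree > best_agree:
                best, best_agree = set(S), agree
    return best

random.seed(0)
examples = noisy_parity_samples(n=10, S={1, 4, 7}, eta=0.1, m=500)
print(brute_force_parity_learner(examples, n=10))  # {1, 4, 7} w.h.p.
```

Because the noise is random rather than adversarial, the true parity agrees with roughly a $1 - \eta$ fraction of the labels while every other parity agrees with roughly half of them, which is why agreement maximization succeeds on this toy instance; one of the paper's results is that the adversarial-noise version of the problem reduces to this random-noise version.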