Agnostic Learning of Monomials by Halfspaces Is Hard

  • Authors:
  • Vitaly Feldman, Venkatesan Guruswami, Prasad Raghavendra, Yi Wu

  • Venue:
  • FOCS '09 Proceedings of the 2009 50th Annual IEEE Symposium on Foundations of Computer Science
  • Year:
  • 2009

Abstract

We prove the following strong hardness result for learning: given a distribution on labeled examples from the hypercube such that there exists a monomial (or conjunction) consistent with a (1-ε)-fraction of the examples, it is NP-hard to find a halfspace that is correct on a (1/2+ε)-fraction of the examples, for an arbitrary constant ε > 0. In learning-theory terms, weak agnostic learning of monomials by halfspaces is NP-hard. This hardness result bridges and subsumes two previous results that showed similar hardness for the proper learning of monomials and of halfspaces. As immediate corollaries of our result, we give the first optimal hardness results for weak agnostic learning of decision lists and majorities. Our techniques are quite different from previous hardness proofs for learning. We use an invariance principle and a sparse approximation of halfspaces from recent work on fooling halfspaces to give a new, natural list decoding of a halfspace in the context of dictatorship tests and Label Cover reductions. In addition, unlike previous invariance-principle-based proofs, which are only known to give Unique Games hardness, we give a reduction from a smooth version of Label Cover that is known to be NP-hard.
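The quantitative statement in the abstract can be written out as follows. This is an informal paraphrase for readability; the notation (D for the example distribution, {0,1}^n for the hypercube) is chosen here for illustration and is not quoted from the paper.

% Informal paraphrase of the main hardness result stated in the abstract.
% D and {0,1}^n are notational choices made here, not quoted from the paper.
\begin{theorem}[informal]
For every constant $\epsilon > 0$, given a distribution $D$ on labeled examples
$(x, y) \in \{0,1\}^n \times \{0,1\}$ such that some monomial (conjunction) $c$
satisfies
\[
  \Pr_{(x,y) \sim D}\bigl[\, c(x) = y \,\bigr] \ge 1 - \epsilon ,
\]
it is NP-hard to find a halfspace $h$ with
\[
  \Pr_{(x,y) \sim D}\bigl[\, h(x) = y \,\bigr] \ge \tfrac{1}{2} + \epsilon .
\]
\end{theorem}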