A lower bound for agnostically learning disjunctions

  • Authors:
  • Adam R. Klivans; Alexander A. Sherstov

  • Affiliations:
  • The University of Texas at Austin, Department of Computer Sciences, Austin, TX

  • Venue:
  • COLT'07: Proceedings of the 20th Annual Conference on Learning Theory
  • Year:
  • 2007


Abstract

We prove that the concept class of disjunctions cannot be pointwise approximated by linear combinations of any small set of arbitrary real-valued functions. That is, suppose there exist functions φ_1, ..., φ_r : {-1,1}^n → R with the property that every disjunction f on n variables satisfies ∥f − Σ_{i=1}^{r} α_i φ_i∥_∞ ≤ 1/3 for some reals α_1, ..., α_r. We prove that then r ≥ 2^{Ω(√n)}. This lower bound is tight. We prove an incomparable lower bound for the concept class of linear-size DNF formulas. For the concept class of majority functions, we obtain a lower bound of Ω(2^n/n), which almost meets the trivial upper bound of 2^n attainable for any concept class. These lower bounds substantially strengthen and generalize the polynomial approximation lower bounds of Paturi, and show that the regression-based agnostic learning algorithm of Kalai et al. is optimal. Our techniques involve a careful application of results in communication complexity due to Razborov and to Buhrman et al.
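A minimal sketch to make the approximation quantity concrete: for small n, enumerate all points of {-1,1}^n and compute the pointwise (ℓ∞) error ∥f − Σ α_i φ_i∥_∞ for one disjunction f and a fixed coefficient vector. The ±1 encoding (with −1 read as "true") and the degree-≤1 parity basis are illustrative assumptions here, not choices made in the paper.

```python
import itertools

n = 4
points = list(itertools.product([-1, 1], repeat=n))

# One disjunction in the +/-1 convention (assumption: -1 encodes "true"):
# f(x) = 1 if some variable is true, else -1.
def disjunction(x):
    return 1 if any(v == -1 for v in x) else -1

# Illustrative basis phi_1, ..., phi_r: the constant function plus the
# n coordinate functions (all parities of degree <= 1), so r = n + 1.
basis = [lambda x: 1.0] + [(lambda i: lambda x: float(x[i]))(i) for i in range(n)]

# Pointwise (l_infinity) error of sum_i alpha_i * phi_i against f.
def linf_error(alpha):
    return max(abs(disjunction(x) - sum(a * phi(x) for a, phi in zip(alpha, basis)))
               for x in points)
```

For instance, the all-zeros coefficient vector gives error 1, since |f(x)| = 1 everywhere; the paper's theorem says that driving this error below 1/3 for every disjunction simultaneously forces r ≥ 2^{Ω(√n)}, no matter how the basis functions are chosen.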