Every Linear Threshold Function has a Low-Weight Approximator

  • Authors:
  • Rocco A. Servedio

  • Affiliations:
  • Department of Computer Science, Columbia University, New York, USA 10027-7003

  • Venue:
  • Computational Complexity
  • Year:
  • 2007

Abstract

Given any linear threshold function f on n Boolean variables, we construct a linear threshold function g which disagrees with f on at most an $\epsilon$ fraction of inputs and has integer weights each of magnitude at most $\sqrt{n}\cdot 2^{\tilde{O}(1/\epsilon^{2})}$. We show that the construction is optimal in terms of its dependence on n by proving a lower bound of $\Omega(\sqrt{n})$ on the weights required to approximate a particular linear threshold function. We give two applications. The first is a deterministic algorithm for approximately counting the fraction of satisfying assignments to an instance of the zero-one knapsack problem to within an additive $\pm\epsilon$. The algorithm runs in time polynomial in n (but exponential in $1/\epsilon^{2}$). In our second application, we show that any linear threshold function f is specified to within error $\epsilon$ by estimates of its Chow parameters (degree 0 and 1 Fourier coefficients) which are accurate to within an additive $\pm 1/(n\cdot 2^{\tilde{O}(1/\epsilon^{2})})$. This is the first such accuracy bound which is inverse polynomial in n, and gives the first polynomial bound (in terms of n) on the number of examples required for learning linear threshold functions in the "restricted focus of attention" framework.
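To make the first application concrete, the quantity being approximated is the fraction of 0/1 assignments that satisfy a single knapsack constraint $w\cdot x \le \theta$. The sketch below is a minimal, naive brute-force enumeration of that fraction, shown only to illustrate the counting problem; it is not the paper's deterministic approximation algorithm, and the example weights and threshold are hypothetical values chosen for illustration.

```python
from itertools import product

def knapsack_fraction(weights, threshold):
    """Fraction of x in {0,1}^n with sum_i weights[i] * x[i] <= threshold.

    Naive enumeration over all 2^n assignments, so exponential in n.
    The paper's algorithm instead approximates this fraction to within
    an additive +-epsilon in time polynomial in n (but exponential
    in 1/epsilon^2); this sketch only defines the target quantity.
    """
    n = len(weights)
    count = sum(
        1
        for x in product((0, 1), repeat=n)
        if sum(w * xi for w, xi in zip(weights, x)) <= threshold
    )
    return count / 2 ** n

# Hypothetical example: 3 items with weights 2, 3, 5 and capacity 5.
print(knapsack_fraction([2, 3, 5], 5))  # 5 of 8 assignments satisfy it -> 0.625
```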