Estimating a Boolean Perceptron from Its Average Satisfying Assignment: A Bound on the Precision Required

  • Authors:
  • Paul W. Goldberg


  • Venue:
  • COLT '01/EuroCOLT '01 Proceedings of the 14th Annual Conference on Computational Learning Theory and the 5th European Conference on Computational Learning Theory
  • Year:
  • 2001


Abstract

A boolean perceptron is a linear threshold function over the discrete boolean domain {0,1}^n. That is, it maps any binary vector to 0 or 1 depending on whether the vector's components satisfy some linear inequality. In 1961, Chow [9] showed that any boolean perceptron is determined by the average, or "center of gravity", of its "true" vectors (those that are mapped to 1). Moreover, this average distinguishes the function from any other boolean function, not just from other boolean perceptrons. We address an associated statistical question: whether an empirical estimate of this average is likely to provide a good approximation to the perceptron. In this paper we show that an estimate that is accurate to within additive error (ε/n)^{O(log(1/ε))} determines a boolean perceptron that is accurate to within error ε (the fraction of misclassified vectors). This provides a mildly super-polynomial bound on the sample complexity of learning boolean perceptrons in the "restricted focus of attention" setting. In the process we also establish some interesting geometrical properties of the vertices of the unit hypercube.
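For concreteness, here is a minimal brute-force sketch (in Python; the function names and example weights are illustrative, not taken from the paper) of the "center of gravity" that Chow's theorem refers to: the componentwise average of all vectors in {0,1}^n that the perceptron maps to 1.

```python
import itertools
import numpy as np

def perceptron(w, theta, x):
    """Boolean perceptron: returns 1 if w . x >= theta, else 0."""
    return 1 if np.dot(w, x) >= theta else 0

def center_of_gravity(w, theta, n):
    """Componentwise average of the 'true' vectors of the perceptron,
    computed by exhaustive enumeration of {0,1}^n (illustrative only;
    this is exponential in n and feasible only for small n)."""
    true_vectors = [np.array(x)
                    for x in itertools.product([0, 1], repeat=n)
                    if perceptron(w, theta, x)]
    if not true_vectors:
        return None  # the constant-0 function has no true vectors
    return np.mean(true_vectors, axis=0)

if __name__ == "__main__":
    n = 4
    # Two distinct perceptrons over {0,1}^4; by Chow's theorem their
    # centers of gravity differ whenever the functions differ.
    print(center_of_gravity([1, 1, 1, 1], 2, n))  # majority-like threshold
    print(center_of_gravity([2, 1, 1, 1], 2, n))  # weight on x_1 doubled
```

The enumeration above is exponential in n, so it only illustrates the definition; the paper's question is the statistical one of how precisely a sampled estimate of this average must approximate the true average (to within (ε/n)^{O(log(1/ε))} additive error) in order to pin down the perceptron to within classification error ε.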