Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM.
On specifying Boolean functions by labelled examples. Discrete Applied Mathematics.
On Restricted-Focus-of-Attention Learnability of Boolean Functions. Machine Learning (special issue on the Ninth Annual Conference on Computational Learning Theory, COLT '96).
Learning with restricted focus of attention. Journal of Computer and System Sciences.
Learning fixed-dimension linear thresholds from fragmented data. Proceedings of the Twelfth Annual Conference on Computational Learning Theory (COLT '99).
Chow Parameters in Threshold Logic. Journal of the ACM.
Learning in Neural Networks: Theoretical Foundations.
Decision lists and related Boolean functions. Theoretical Computer Science.
Learnability with Restricted Focus of Attention guarantees Noise-Tolerance. Proceedings of the 4th International Workshop on Analogical and Inductive Inference (AII '94), Algorithmic Learning Theory.
Threshold Gate Approximations Based on Chow Parameters. IEEE Transactions on Computers.
On the characterization of threshold functions. Proceedings of the 2nd Annual Symposium on Switching Circuit Theory and Logical Design (SWCT 1961).
A boolean perceptron is a linear threshold function over the discrete boolean domain {0,1}^n. That is, it maps any binary vector to 0 or 1 according to whether the vector's components satisfy some linear inequality. In 1961, Chow [9] showed that any boolean perceptron is determined by the average, or "center of gravity", of its "true" vectors (those that it maps to 1). Moreover, this average distinguishes the function from every other boolean function, not just from other boolean perceptrons. We address the associated statistical question of whether an empirical estimate of this average is likely to provide a good approximation to the perceptron. In this paper we show that an estimate that is accurate to within additive error (ε/n)^{O(log(1/ε))} determines a boolean perceptron that is accurate to within error ε (the fraction of misclassified vectors). This provides a mildly super-polynomial bound on the sample complexity of learning boolean perceptrons in the "restricted focus of attention" setting. In the process we also find some interesting geometrical properties of the vertices of the unit hypercube.
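Chow's characterization is easy to illustrate empirically. The following minimal Python sketch (an illustration under stated assumptions, not the paper's algorithm) draws uniform samples from {0,1}^n and estimates the center of gravity of the true vectors, i.e. the quantity whose required estimation accuracy the paper analyzes. The weight vector w, threshold theta, and sample size m below are hypothetical choices, not taken from the paper.

```python
import numpy as np

# Minimal sketch: empirically estimate the "center of gravity" of a
# boolean perceptron's true vectors under the uniform distribution.
# `w`, `theta`, and `m` are hypothetical example values.

rng = np.random.default_rng(0)
n = 8                                   # dimension of the boolean cube {0,1}^n
w = rng.normal(size=n)                  # assumed example weight vector
theta = 0.0                             # assumed example threshold

m = 100_000                             # number of uniform samples from {0,1}^n
X = rng.integers(0, 2, size=(m, n))

# Boolean perceptron: f(x) = 1 iff w . x >= theta.
labels = X @ w >= theta

# Empirical average of the "true" vectors. By Chow's theorem, the exact
# average (together with the fraction of true vectors) determines f among
# all boolean functions; the paper bounds how accurate this estimate must
# be for the recovered perceptron to have error at most epsilon.
true_vectors = X[labels]
center_of_gravity = true_vectors.mean(axis=0)

print(f"fraction of true vectors: {labels.mean():.4f}")
print("empirical center of gravity:", np.round(center_of_gravity, 3))
```

Uniform sampling is used here because the abstract measures error as the fraction of misclassified vectors of {0,1}^n; the paper's main result then says that driving the additive error of this estimate below (ε/n)^{O(log(1/ε))} suffices to pin down a perceptron with error at most ε.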