The transition to perfect generalization in perceptrons

  • Authors:
  • Eric B. Baum; Yuh-Dauh Lyuu

  • Affiliations:
  • NEC Research Institute, Princeton, NJ 08540 USA (both authors)

  • Venue:
  • Neural Computation
  • Year:
  • 1991

Abstract

Several recent papers (Gardner and Derrida 1989; Györgyi 1990; Sompolinsky et al. 1990) have found, using methods of statistical physics, that a transition to perfect generalization occurs in training a simple perceptron whose weights can only take the values ±1. We give a rigorous proof of such a phenomenon. That is, we show, for α = 2.0821, that if at least αn examples are drawn from the uniform distribution on {+1, −1}^n and classified according to a target perceptron w_t ∈ {+1, −1}^n as positive or negative according to whether w_t · x is nonnegative or negative, then the probability is 2^(−Ω(√n)) that there is any other such perceptron consistent with the examples. Numerical results indicate further that perfect generalization holds for α as low as 1.5.
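
To make the setup concrete, the following is a minimal simulation sketch of the model described in the abstract, not the authors' proof technique or code. For a small n it draws αn examples uniformly from {+1, −1}^n, labels them by the sign of w_t · x (nonnegative maps to +1), and exhaustively counts how many ±1 weight vectors are consistent with the labels. The function name count_consistent, the parameter choices, and the use of NumPy are all illustrative assumptions.

```python
# Toy simulation of the perfect-generalization setup (illustrative sketch,
# not the paper's method): exhaustively count +-1 perceptrons consistent
# with alpha*n random examples labeled by a random +-1 target perceptron.
import itertools
import numpy as np

def count_consistent(n, alpha, rng):
    """Count +-1 weight vectors consistent with round(alpha*n) examples."""
    m = int(round(alpha * n))
    w_target = rng.choice([-1, 1], size=n)          # target perceptron w_t
    X = rng.choice([-1, 1], size=(m, n))            # examples from {+1,-1}^n
    # Label +1 iff w_t . x is nonnegative, else -1, as in the abstract.
    y = np.where(X @ w_target >= 0, 1, -1)
    consistent = 0
    for w in itertools.product([-1, 1], repeat=n):  # all 2^n candidates
        w = np.asarray(w)
        if np.array_equal(np.where(X @ w >= 0, 1, -1), y):
            consistent += 1
    # A count of 1 means only the target itself fits: perfect generalization.
    return consistent

rng = np.random.default_rng(0)
n = 15  # exhaustive search over 2^15 candidates; keep n small
for alpha in (0.5, 1.0, 1.5, 2.1):
    counts = [count_consistent(n, alpha, rng) for _ in range(5)]
    print(f"alpha={alpha}: consistent perceptrons per trial = {counts}")
```

At such small n, finite-size effects blur the sharp transition that the statistical-physics analyses and the rigorous proof concern in the n → ∞ limit; the sketch only shows the qualitative trend that the number of consistent perceptrons shrinks toward 1 as α grows.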