On learning noisy threshold functions with finite precision weights

  • Authors:
  • Ronny Meir; Jose F. Fontanari

  • Affiliations:
  • Department of Electrical Engineering, Technion, Haifa 32000, Israel; IFQSC - DFCM, Universidade de São Paulo, 13560 São Carlos SP, Brazil

  • Venue:
  • COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
  • Year:
  • 1992

Abstract

We address the issue of the precision required by an N-input threshold element in order to implement a linearly separable mapping. In contrast to previous work, we require only the ability to correctly implement the mapping of P randomly chosen training examples, as opposed to the complete Boolean mapping. Our results are obtained within the statistical mechanics approach and are thus average-case results, as opposed to the worst-case analyses in the computational learning theory literature. We show that as long as the fraction P/N is finite, then with probability close to 1 as N → ∞ a finite number of bits suffices to implement the mapping. This should be compared to the worst-case analysis, which requires O(N log N) bits. We also calculate the ability of the constrained network to predict novel examples and compare its predictions to those of an unconstrained network. Finally, we address the performance of the finite-precision network in the face of noisy training examples.
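The setup described in the abstract can be illustrated with a minimal sketch (this is not the paper's statistical-mechanics calculation, only an assumed toy analogue): a standard perceptron learning rule whose weights are rounded to b-bit precision after every update, trained on P random patterns labeled by a "teacher" threshold unit. All function names and parameter values below are illustrative choices, not from the paper.

```python
import numpy as np

def quantize(w, bits):
    """Round each weight to one of 2**bits - 1 evenly spaced levels in [-1, 1]."""
    levels = 2 ** bits - 1
    w = np.clip(w, -1.0, 1.0)
    return np.round((w + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0

def train_finite_precision_perceptron(X, y, bits, epochs=300, lr=0.05):
    """Perceptron rule with weights re-quantized after every update."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:          # example misclassified (or on boundary)
                w = quantize(w + lr * yi * xi, bits)
                errors += 1
        if errors == 0:                      # all P training examples implemented
            break
    return w

rng = np.random.default_rng(0)
N, P = 20, 20                                # N inputs, P = alpha*N examples (alpha = 1)
teacher = rng.standard_normal(N)             # unconstrained "teacher" threshold unit
X = rng.choice([-1.0, 1.0], size=(P, N))     # P random binary patterns
y = np.sign(X @ teacher)                     # linearly separable labels

w = train_finite_precision_perceptron(X, y, bits=8)
train_acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy with 8-bit weights: {train_acc:.2f}")
```

The sketch only checks that a fixed, finite bit budget can implement the mapping on the P training examples; the paper's claim concerns the typical (average-case) behavior of this quantity as N → ∞ with P/N held finite.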