Robust trainability of single neurons

  • Authors:
  • Klaus-Uwe Höffgen, Hans Ulrich Simon

  • Affiliations:
  • Lehrstuhl Informatik II, Universität Dortmund, 4600 Dortmund 50 (both authors)

  • Venue:
  • COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory
  • Year:
  • 1992

Abstract

We investigate the problem of learning concepts by presenting labeled, randomly chosen training examples to single neurons. It is well known that linear halfspaces are learnable by the method of linear programming. The corresponding (McCulloch-Pitts) neurons are therefore efficiently trainable to learn an unknown halfspace from examples. We analyze how fast the learning performance degrades when the representational power of the neuron is overstrained, i.e., when concepts more complex than halfspaces are allowed. We show that a neuron cannot efficiently find its probably almost optimal adjustment (unless RP = NP). If the weights and the threshold of the neuron have a fixed constant bound on their coding length, the situation is even worse: in general there is no polynomial-time training method that bounds the resulting prediction error of the neuron by k·opt for a fixed constant k (unless RP = NP). Other variants of learning concepts more complex than halfspaces by single neurons are also investigated. We show that neither heuristic learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP).
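
To make the positive result concrete: in the separable (pure halfspace) case, training reduces to a linear-programming feasibility problem. Below is a minimal sketch of that reduction, assuming NumPy and SciPy are available; the function name `train_halfspace` and the margin normalization y·(w·x − t) ≥ 1 are illustration choices, not the paper's construction.

```python
import numpy as np
from scipy.optimize import linprog

def train_halfspace(X, y):
    """Fit a McCulloch-Pitts neuron (halfspace) to labeled examples via LP.

    X : (n, d) array of examples, y : (n,) array of labels in {-1, +1}.
    Returns (w, t) with y_i * (w . x_i - t) >= 1 for all i, or None if
    the sample is not linearly separable.
    """
    n, d = X.shape
    # Variables: d weights plus one threshold t.
    # Constraint y_i (w . x_i - t) >= 1 becomes, in linprog's
    # A_ub @ z <= b_ub form:  -y_i * (x_i, -1) . (w, t) <= -1.
    A_ub = -y[:, None] * np.hstack([X, -np.ones((n, 1))])
    b_ub = -np.ones(n)
    c = np.zeros(d + 1)  # pure feasibility: dummy objective
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1), method="highs")
    if not res.success:
        return None      # no consistent halfspace exists
    return res.x[:d], res.x[d]

# Usage: a separable toy sample in the plane.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 0.0]])
y = np.array([-1, -1, 1, 1])
print(train_halfspace(X, y))
```

The paper's hardness results concern exactly the failure of this picture once the sample need not be consistent with any halfspace: then no polynomial-time method can even approximate the optimal adjustment, unless RP = NP.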