On Approximate Learning by Multi-layered Feedforward Circuits

  • Authors:
  • Bhaskar DasGupta; Barbara Hammer

  • Venue:
  • ALT '00: Proceedings of the 11th International Conference on Algorithmic Learning Theory
  • Year:
  • 2000

Abstract

We consider the problem of efficient approximate learning by multi-layered feedforward circuits with respect to two objective functions. First, we consider the objective of maximizing the ratio of correctly classified points to the training set size (e.g., see [3,5]). We show that for single hidden layer threshold circuits with n hidden nodes and varying input dimension, approximating this ratio within a relative error c/n³, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function is used in the hidden layer and ε-separation of the output [19] is required, or if the semilinear activation function replaces the threshold function. Next, we consider the objective of minimizing the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within any constant larger than 1 for a multilayered threshold circuit, provided the input biases are zero. Furthermore, even weak approximation of this objective is almost NP-hard.
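
To make the two objectives concrete, the following minimal Python sketch (illustrative only, not from the paper; the names threshold_circuit, success_ratio, failure_ratio and the parameters W, b, v, c0 are hypothetical) implements a single hidden layer threshold circuit and the two quantities discussed above, assuming 0/1 labels and, for the failure ratio, the definition of [2] as the number of misclassifications relative to the minimum achievable:

    import numpy as np

    def threshold_circuit(x, W, b, v, c0):
        # Single hidden layer threshold circuit: each of the n hidden nodes
        # applies the Heaviside (threshold) activation to an affine function
        # of the input; the output node thresholds a weighted sum of the
        # hidden activations.
        hidden = (W @ x + b > 0).astype(float)   # n hidden threshold nodes
        return float(v @ hidden + c0 > 0)        # threshold output node

    def success_ratio(params, examples):
        # First objective: the fraction of correctly classified points,
        # i.e. (# correctly classified) / (training set size).
        W, b, v, c0 = params
        hits = sum(threshold_circuit(x, W, b, v, c0) == y for x, y in examples)
        return hits / len(examples)

    def failure_ratio(params, examples, opt_failures):
        # Second objective (assuming the definition of [2]): the number of
        # misclassifications of the circuit relative to the minimum number
        # achievable on this training set (opt_failures, taken here as a
        # given positive quantity).
        W, b, v, c0 = params
        misses = sum(threshold_circuit(x, W, b, v, c0) != y for x, y in examples)
        return misses / opt_failures

The hardness results concern the search problems behind these quantities: evaluating either ratio for fixed parameters is easy, but finding weights (W, b, v, c0) whose success ratio comes within a relative error c/n³ of the optimum, or whose failure ratio is within any constant factor larger than 1, is NP-hard in the settings stated above.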