Hardness Results for Neural Network Approximation Problems

  • Authors:
  • Peter L. Bartlett; Shai Ben-David

  • Venue:
  • EuroCOLT '99 Proceedings of the 4th European Conference on Computational Learning Theory
  • Year:
  • 1999

Abstract

We consider the problem of efficiently learning in two-layer neural networks. We show that it is NP-hard to find a linear threshold network of a fixed size that approximately minimizes the proportion of misclassified examples in a training set, even if there is a network that correctly classifies all of the training examples. In particular, for a training set that is correctly classified by some two-layer linear threshold network with k hidden units, it is NP-hard to find such a network that makes mistakes on a proportion smaller than c/k³ of the examples, for some constant c. We prove a similar result for the problem of approximately minimizing the quadratic loss of a two-layer network with a sigmoid output unit.
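To make the objects in the abstract concrete, the following is a minimal sketch in Python/NumPy of a two-layer linear threshold network with k hidden units, together with the two quantities the hardness results concern: the proportion of misclassified training examples and the quadratic loss of a variant with a sigmoid output unit. All names are illustrative rather than taken from the paper, and details such as ±1 threshold activations and a sigmoid applied over threshold hidden units are assumptions made for this sketch.

```python
import numpy as np

def two_layer_threshold_net(X, W, b, v, b_out):
    # Hidden layer: k linear threshold units, h_i(x) = sign(w_i . x + b_i).
    hidden = np.where(X @ W.T + b >= 0, 1.0, -1.0)   # shape (n, k)
    # Output: a single linear threshold unit over the hidden activations.
    return np.where(hidden @ v + b_out >= 0, 1.0, -1.0)

def misclassified_proportion(y_pred, y):
    # The quantity in the first hardness result: the fraction of
    # training examples the network labels incorrectly.
    return float(np.mean(y_pred != y))

def quadratic_loss_sigmoid_output(X, W, b, v, b_out, y01):
    # Variant for the second result (an assumption about the exact
    # architecture): same hidden layer, but the output unit is a
    # sigmoid, scored by mean squared error against targets in [0, 1].
    hidden = np.where(X @ W.T + b >= 0, 1.0, -1.0)
    out = 1.0 / (1.0 + np.exp(-(hidden @ v + b_out)))
    return float(np.mean((out - y01) ** 2))

# Example: data labeled by a random k-hidden-unit threshold network is
# perfectly classifiable by construction; the hardness result says that
# even then, finding a k-unit network with error below c/k^3 is NP-hard.
rng = np.random.default_rng(0)
n, d, k = 200, 5, 3
X = rng.normal(size=(n, d))
W, b = rng.normal(size=(k, d)), rng.normal(size=k)
v, b_out = rng.normal(size=k), 0.0
y = two_layer_threshold_net(X, W, b, v, b_out)
print(misclassified_proportion(two_layer_threshold_net(X, W, b, v, b_out), y))  # 0.0
```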