How to construct random functions
Journal of the ACM (JACM)
Cryptographic limitations on learning Boolean formulae and finite automata
STOC '89 Proceedings of the twenty-first annual ACM symposium on Theory of computing
Neural network design and the complexity of learning
On learning a union of half spaces
Journal of Complexity
Decision theoretic generalizations of the PAC model for neural net and other learning applications
Information and Computation
Robust trainability of single neurons
Journal of Computer and System Sciences
The hardness of approximation: gap location
Computational Complexity
The hardness of approximate optima in lattices, codes, and systems of linear equations
Journal of Computer and System Sciences - Special issue: papers from the 32nd and 34th annual symposia on foundations of computer science, Oct. 2–4, 1991 and Nov. 3–5, 1993
On the infeasibility of training neural networks with small squared errors
NIPS '97 Proceedings of the 1997 conference on Advances in neural information processing systems 10
On the hardness of approximating minimization problems
Journal of the ACM (JACM)
On the Hardness of Approximating Max k-Cut and Its Dual
Efficient agnostic learning of neural networks with bounded fan-in
IEEE Transactions on Information Theory - Part 2
The computational intractability of training sigmoidal neural networks
IEEE Transactions on Information Theory
On the complexity of training neural networks with continuous activation functions
IEEE Transactions on Neural Networks
Structural Complexity and Neural Networks
WIRN VIETRI 2002 Proceedings of the 13th Italian Workshop on Neural Nets-Revised Papers
On Approximate Learning by Multi-layered Feedforward Circuits
ALT '00 Proceedings of the 11th International Conference on Algorithmic Learning Theory
Minimizing the Quadratic Training Error of a Sigmoid Neuron Is Hard
ALT '01 Proceedings of the 12th International Conference on Algorithmic Learning Theory
COLT '01/EuroCOLT '01 Proceedings of the 14th Annual Conference on Computational Learning Theory and 5th European Conference on Computational Learning Theory
Bounds for the Minimum Disagreement Problem with Applications to Learning Theory
COLT '02 Proceedings of the 15th Annual Conference on Computational Learning Theory
We consider the problem of efficiently learning in two-layer neural networks. We show that it is NP-hard to find a linear threshold network of a fixed size that approximately minimizes the proportion of misclassified examples in a training set, even if there is a network that correctly classifies all of the training examples. In particular, for a training set that is correctly classified by some two-layer linear threshold network with k hidden units, it is NP-hard to find such a network that makes mistakes on a proportion smaller than c/k^3 of the examples, for some constant c. We prove a similar result for the problem of approximately minimizing the quadratic loss of a two-layer network with a sigmoid output unit.
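The two quantities that the abstract says are hard to approximately minimize can be made concrete. The following is an illustrative Python sketch (the function names, weight values, and toy data are our own assumptions for exposition, not taken from the paper): a two-layer linear threshold network with k hidden units, the proportion of misclassified training examples, and the quadratic loss of the variant with a sigmoid output unit.

```python
import numpy as np

def two_layer_threshold_net(X, W, b, v, c):
    """Two-layer linear threshold network with k hidden units.

    X: (n, d) examples; W: (k, d) hidden weights; b: (k,) hidden biases;
    v: (k,) output weights; c: scalar output bias. Returns labels in {-1, +1}.
    """
    hidden = (X @ W.T + b >= 0).astype(float)    # k linear threshold units
    return np.where(hidden @ v + c >= 0, 1, -1)  # linear threshold output unit

def misclassification_proportion(X, y, W, b, v, c):
    """Proportion of misclassified training examples -- the quantity that is
    NP-hard to approximately minimize (within c/k^3 of zero)."""
    return np.mean(two_layer_threshold_net(X, W, b, v, c) != y)

def sigmoid_quadratic_loss(X, y01, W, b, v, c):
    """Quadratic loss of the same network with a sigmoid output unit,
    against {0, 1} targets -- the quantity in the paper's second result."""
    hidden = (X @ W.T + b >= 0).astype(float)
    out = 1.0 / (1.0 + np.exp(-(hidden @ v + c)))
    return np.mean((out - y01) ** 2)

# Toy training set (XOR-like) that a k = 2 network classifies exactly,
# so the minimum misclassification proportion is 0.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, 1, 1, -1])
W = np.array([[1., 1.], [1., 1.]])
b = np.array([-0.5, -1.5])
v = np.array([1., -2.])
c = -0.5
print(misclassification_proportion(X, y, W, b, v, c))  # -> 0.0
```

The hardness result says that even when, as here, some network of the given size fits the data perfectly, finding one whose error proportion is within c/k^3 of zero is NP-hard.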