Parallel computation with threshold functions
Journal of Computer and System Sciences
On the complexity of loading shallow neural networks
Journal of Complexity
Neural network design and the complexity of learning
MIT Press
Computational limitations on training sigmoid neural networks
Information Processing Letters
Feedforward nets for interpolation and classification
Journal of Computer and System Sciences
Discrete neural computation: a theoretical foundation
Prentice Hall
Robust trainability of single neurons
Journal of Computer and System Sciences
The complexity and approximability of finding maximum feasible subsystems of linear relations
Theoretical Computer Science
Back-propagation is not efficient
Neural Networks
The hardness of approximate optima in lattices, codes, and systems of linear equations
Journal of Computer and System Sciences
On the infeasibility of training neural networks with small squared errors
Advances in Neural Information Processing Systems 10 (NIPS 1997)
On the hardness of approximating minimization problems
Journal of the ACM
A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems
Springer
Computers and Intractability: A Guide to the Theory of NP-Completeness
W. H. Freeman
Hardness Results for Neural Network Approximation Problems
Proceedings of the 4th European Conference on Computational Learning Theory (EuroCOLT 1999)
On the Difficulty of Approximately Maximizing Agreements
Proceedings of the Thirteenth Annual Conference on Computational Learning Theory (COLT 2000)
On the Hardness of Approximating Max k-Cut and Its Dual
Chicago Journal of Theoretical Computer Science
The computational intractability of training sigmoidal neural networks
IEEE Transactions on Information Theory
Minimizing the Quadratic Training Error of a Sigmoid Neuron Is Hard
Proceedings of the 12th International Conference on Algorithmic Learning Theory (ALT 2001)
Recurrent networks for structured data - A unifying approach and its properties
Cognitive Systems Research
We consider the problem of efficient approximate learning by multi-layered feedforward circuits subject to two objective functions. First, we consider the objective of maximizing the ratio of correctly classified points to the size of the training set (e.g., see [3,5]). We show that for single-hidden-layer threshold circuits with n hidden nodes and varying input dimension, approximating this ratio within a relative error c/n^3, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if an arbitrary sigmoid-like activation function is used in the hidden layer together with ε-separation of the output [19], or if the semilinear activation function is substituted for the threshold function. Next, we consider the objective of minimizing the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within any constant larger than 1 for multilayered threshold circuits, provided the input biases are zero. Furthermore, even a weak approximation of this objective is almost NP-hard.
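For concreteness, the two objectives can be stated as follows (our notation, not taken from the abstract: f is the function computed by the trained circuit, D is the training set, and the minimum ranges over all circuits g of the prescribed architecture):

    success(f, D) = |{ (x, y) in D : f(x) = y }| / |D|

    fail(f, D) = |{ (x, y) in D : f(x) != y }| / min_g |{ (x, y) in D : g(x) != y }|

Under this reading, the first family of results says that success(f, D) cannot be approximated within relative error c/n^3 in polynomial time unless P = NP, while the second says the analogous statement for approximating fail(f, D) within any constant factor larger than 1.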