We deal with the problem of efficiently learning feedforward neural networks. First, we consider the objective of maximizing the ratio of correctly classified points to the size of the training set. We show that it is NP-hard to approximate this ratio within some constant relative error for architectures with varying input dimension, one hidden layer, and two hidden neurons, where either the activation function of the hidden layer is the sigmoid function and epsilon-separation is assumed, or the activation function is the semilinear function. For single-hidden-layer threshold networks with varying input dimension and n hidden neurons, approximation within a relative error depending on n is NP-hard even when the number of examples is limited with respect to n.

Afterwards, we consider the objective of minimizing the failure ratio in the presence of misclassification errors. We show that it is NP-hard to approximate the failure ratio within any positive constant for a multilayered threshold network with varying input dimension and a fixed number of neurons in the hidden layer, provided the thresholds of the neurons in the first hidden layer are zero. Furthermore, even obtaining weak approximations is almost NP-hard in the same situation.
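To make the first objective concrete, here is a minimal sketch (not from the paper) of the success-ratio quantity analyzed above: a network with varying input dimension d, one hidden layer of two sigmoidal neurons, and a single output, scored by the fraction of training points it classifies correctly. All function and parameter names below are illustrative assumptions, not the authors' notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def success_ratio(W, b, v, c, X, y):
    """Fraction of correctly classified examples.

    W: (2, d) hidden-layer weights, b: (2,) hidden biases,
    v: (2,) output weights, c: scalar output bias,
    X: (m, d) training inputs, y: (m,) labels in {0, 1}.
    """
    hidden = sigmoid(X @ W.T + b)            # (m, 2) hidden activations
    output = hidden @ v + c                  # (m,) real-valued outputs
    predictions = (output >= 0.5).astype(int)
    return float(np.mean(predictions == y))

# Toy usage with random parameters on random data. The hardness results
# stated above concern maximizing this quantity over (W, b, v, c): even
# approximating the optimum within some constant relative error is NP-hard
# in the regimes the abstract describes.
rng = np.random.default_rng(0)
d, m = 5, 20
X = rng.normal(size=(m, d))
y = rng.integers(0, 2, size=m)
W, b = rng.normal(size=(2, d)), rng.normal(size=2)
v, c = rng.normal(size=2), rng.normal()
print(f"success ratio: {success_ratio(W, b, v, c, X, y):.2f}")
```

Evaluating the objective for fixed parameters, as this sketch does, is easy; the intractability lies entirely in the search over parameters.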