We consider the computational complexity of learning by neural nets. We are interested in how hard it is to design appropriate neural net architectures and to train neural nets for general and specialized learning tasks. Our main result shows that the training problem for 2-cascade neural nets (which have only two non-input nodes, one of which is hidden) is NP-complete, which implies that finding an optimal net (in terms of the number of non-input units) consistent with a set of examples is also NP-complete. This result also demonstrates a surprising gap between the computational complexities of one-node (perceptron) and two-node neural net training problems, since the perceptron training problem can be solved in polynomial time by linear programming techniques. We conjecture that training a k-cascade neural net, a classical threshold network training problem, is also NP-complete for each fixed k ≥ 2. We also show that the problem of finding an optimal perceptron (in terms of the number of non-zero weights) consistent with a set of training examples is NP-hard.

Our neural net learning model encapsulates the idea of modular neural nets, a popular approach to overcoming the scaling problem in training neural nets. We investigate how much easier the training problem becomes if the class of concepts to be learned is known a priori and the net architecture is allowed to be sufficiently non-optimal. Finally, we classify several neural net optimization problems within the polynomial-time hierarchy.
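To make the contrast with the two-node hardness result concrete, the following is a minimal sketch (not from the paper) of the claim that perceptron training is solvable in polynomial time by linear programming: finding weights consistent with a labeled sample reduces to a linear feasibility problem. The function name, the margin-1 normalization, and the toy AND data set are illustrative assumptions, not the authors' construction.

```python
# Sketch: perceptron consistency as an LP feasibility problem.
# Find w, b with y_i * (w . x_i + b) >= 1 for every example (an assumed
# normalization of strict separation); any LP solver suffices.
import numpy as np
from scipy.optimize import linprog

def train_perceptron_lp(X, y):
    """Return (w, b) consistent with all examples, or None if none exists."""
    n, d = X.shape
    # Variables z = (w_1, ..., w_d, b); constraint rows: -y_i * [x_i, 1] <= -1.
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    b_ub = -np.ones(n)
    c = np.zeros(d + 1)                      # feasibility only: zero objective
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1), method="highs")
    if not res.success:
        return None                          # sample is not linearly separable
    return res.x[:d], res.x[d]

# Toy usage: learn the AND function on {0,1}^2 (hypothetical example data).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1], dtype=float)
w, b = train_perceptron_lp(X, y)
print(w, b, np.sign(X @ w + b))              # predicted signs should match y
```

The design choice here is to treat training purely as feasibility (zero objective); once a second, hidden threshold unit is added, no such convex formulation is known, which is the gap the NP-completeness result formalizes.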