We describe an efficient algorithm for learning, from examples, a class of feedforward neural networks with real-valued inputs and outputs in a real-valued generalization of the Probably Approximately Correct (PAC) model. These networks can approximate arbitrary functions to arbitrary precision. The learning algorithm accommodates a fairly general worst-case noise model. The main improvement over previous work is that the algorithm's running time grows only polynomially with the size of the target network (although an exponential dependence on the dimension of the input space remains). The main computational tool is an iterative "loading" algorithm that adds new hidden units to the hypothesis network one at a time, avoiding the difficult problem of optimizing the weights of all units simultaneously.
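
The structural point of sequential loading can be illustrated with a short, self-contained sketch. The Python code below is not the paper's algorithm and carries none of its guarantees: it assumes a single hidden layer of sigmoid units, substitutes a crude random search for the paper's unit-fitting step, and uses hypothetical helper names (fit_unit_to_residual, load_network). It shows only the loading pattern itself: each iteration fits one new hidden unit against the current residual and then refits the output layer alone, so the weights of all units are never optimized jointly.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_unit_to_residual(X, r, n_candidates=500, rng=None):
    # Hypothetical stand-in for the real unit-fitting step: pick, by random
    # search, the sigmoid unit whose centered activations correlate best
    # with the current residual r.
    if rng is None:
        rng = np.random.default_rng(0)
    best_w, best_b, best_score = None, 0.0, -np.inf
    for _ in range(n_candidates):
        w = rng.normal(size=X.shape[1])
        b = rng.normal()
        h = sigmoid(X @ w + b)
        hc = h - h.mean()
        norm = np.linalg.norm(hc)
        if norm == 0.0:
            continue
        score = abs(hc @ r) / norm
        if score > best_score:
            best_w, best_b, best_score = w, b, score
    return best_w, best_b

def load_network(X, y, n_units):
    # Sequential "loading": add one hidden unit at a time; after each
    # addition, refit only the output-layer weights (a linear least-squares
    # problem), so joint optimization of all units is never attempted.
    H = np.ones((X.shape[0], 1))                     # bias column
    out, *_ = np.linalg.lstsq(H, y, rcond=None)      # fit the constant term
    units, residual = [], y - H @ out
    for _ in range(n_units):
        w, b = fit_unit_to_residual(X, residual)
        units.append((w, b))
        H = np.column_stack([H, sigmoid(X @ w + b)])
        out, *_ = np.linalg.lstsq(H, y, rcond=None)
        residual = y - H @ out
    return units, out

# Usage: approximate a smooth one-dimensional target from noisy samples.
rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(200, 1))
y = np.sin(2.0 * X[:, 0]) + 0.05 * rng.normal(size=200)
units, out = load_network(X, y, n_units=8)

Because only the output weights are refit after each addition, every step reduces to a one-unit search plus a linear least-squares solve; this decomposition is what lets sequential loading sidestep the nonconvex problem of training all hidden units at once.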