In this study we conduct fair and systematic comparisons of two types of neural networks: single- and multiple-hidden-layer networks. To make the comparisons fair, we ensure that the two types use the same activation and output functions and have the same numbers of nodes, feedforward connections, and parameters. The networks are trained by the gradient descent algorithm to approximate linear and quadratic target functions, and we examine their convergence properties. We show that, for both linear and quadratic targets, the range of learning rates under which training converges is wider for networks with a single hidden layer than for those with multiple hidden layers. We also show that single-hidden-layer networks converge to linear target functions faster than multiple-hidden-layer networks do.
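The experimental setup can be made concrete with a minimal sketch. This is not the paper's code: the layer sizes (7 hidden units vs. 3+3 units, both giving 22 trainable parameters), the tanh hidden / linear output activations, the target f(x) = 2x + 1, the learning rate, and the step count are all illustrative assumptions, and unlike the paper this sketch matches only parameter counts, not node and connection counts.

```python
# Sketch (not the paper's code): compare convergence of a 1-hidden-layer
# net (7 tanh units, 22 params) and a 2-hidden-layer net (3+3 tanh units,
# also 22 params), both trained by full-batch gradient descent on a linear
# target. All sizes, activations, and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)    # inputs
Y = 2.0 * X + 1.0                                # linear target f(x) = 2x + 1

def init(sizes):
    """Random weights and zero biases for consecutive layer sizes, e.g. [1, 7, 1]."""
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """tanh hidden layers, linear output; returns all layer activations."""
    acts = [x]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(z if i == len(params) - 1 else np.tanh(z))
    return acts

def step(params, lr):
    """One full-batch gradient-descent step on mean-squared error."""
    acts = forward(params, X)
    n = X.shape[0]
    delta = 2.0 * (acts[-1] - Y) / n             # dMSE / d(output)
    for i in range(len(params) - 1, -1, -1):
        W, b = params[i]
        gW, gb = acts[i].T @ delta, delta.sum(axis=0)
        if i > 0:                                # backprop through tanh
            delta = (delta @ W.T) * (1.0 - acts[i] ** 2)
        params[i] = (W - lr * gW, b - lr * gb)   # gradient-descent update
    return float(np.mean((acts[-1] - Y) ** 2))

shallow, deep = init([1, 7, 1]), init([1, 3, 3, 1])
for t in range(2001):
    ls, ld = step(shallow, 0.1), step(deep, 0.1)
    if t % 500 == 0:
        print(f"step {t:4d}  1-layer MSE {ls:.5f}  2-layer MSE {ld:.5f}")
```

Sweeping the learning rate upward until each network diverges gives a rough picture of the "wider range of convergent learning rates" claim, and the printed loss traces give a rough picture of the convergence-speed claim for a linear target.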