Fast and Stable Learning Utilizing Singular Regions of Multilayer Perceptron
Neural Processing Letters
Multilayer perceptron networks whose outputs are affine combinations of hidden units with the tanh activation function are universal function approximators. They are widely used for regression, typically trained by minimizing the mean squared error (MSE) with backpropagation. We present a weight-learning algorithm that positions the hidden units directly in input space by numerically analyzing the curvature of the output surface. Our results show that, under certain sampling requirements, this method can reliably recover the parameters of the neural network used to generate a data set.
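To make the setting concrete, the following is a minimal sketch (not the paper's method) of the baseline the abstract describes: a one-hidden-layer tanh MLP whose output is an affine combination of hidden units, trained by MSE backpropagation on data generated by a "teacher" network of the same form. All names, sizes, and learning-rate values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a hypothetical "teacher" network of the same form:
# f(x) = b + sum_k v_k * tanh(w_k * x + c_k)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = 0.5 + 1.2 * np.tanh(2.0 * X[:, 0] - 0.3) - 0.8 * np.tanh(-1.5 * X[:, 0] + 0.7)

H = 2                              # number of hidden units (assumed)
W = rng.normal(0.0, 1.0, (1, H))   # input-to-hidden weights
c = np.zeros(H)                    # hidden biases
v = rng.normal(0.0, 1.0, H)        # hidden-to-output weights
b = 0.0                            # output bias
lr = 0.05                          # learning rate (assumed)

def forward(X):
    A = np.tanh(X @ W + c)         # hidden activations, shape (N, H)
    return A, A @ v + b            # affine combination of hidden units

_, pred0 = forward(X)
mse0 = np.mean((pred0 - y) ** 2)   # loss before training

for step in range(2000):
    A, pred = forward(X)
    g_out = 2.0 * (pred - y) / len(X)            # dMSE/dpred
    grad_v = A.T @ g_out                         # backprop to output weights
    grad_b = g_out.sum()
    g_hid = np.outer(g_out, v) * (1.0 - A ** 2)  # through tanh derivative
    grad_W = X.T @ g_hid
    grad_c = g_hid.sum(axis=0)
    W -= lr * grad_W
    c -= lr * grad_c
    v -= lr * grad_v
    b -= lr * grad_b

_, pred = forward(X)
mse = np.mean((pred - y) ** 2)
```

The curvature-based approach of the paper replaces this gradient loop with a direct placement of the hidden units; the sketch above only fixes the model class and the MSE objective being compared against.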