Adaptive, data-driven emulation and control of mechanical systems are popular engineering applications of artificial neural networks. Multilayer perceptron training, however, is an ill-posed nonlinear optimization problem. This paper explores a method of constraining the network parameters so that conventional computational techniques for function approximation can be applied during training. This is accomplished by forming local basis functions that provide accurate approximation and stable evaluation of the network parameters. The approach is quite general and remains compatible with standard network architectures. By employing the concept of shift-invariant subspaces, it yields a new, more robust error criterion for feedforward artificial neural networks and makes it possible both to characterize and to control the accuracy of the local bases that are formed. Two refinement strategies are considered: (1) adding bases while altering their shape and keeping their spacing constant, and (2) adding bases while altering their shape and decreasing their spacing in a coupled fashion. Numerical examples demonstrate the usefulness of the proposed method for approximating functions and their derivatives.
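The abstract describes the two refinement strategies only at a high level. Below is a minimal sketch of one possible reading, assuming Gaussian local bases on a uniform grid of [0, 1] and an ordinary least-squares fit; the paper's actual basis functions, network parameterization, and error criterion are not reproduced here, so every function name, width value, and grid size in the sketch is illustrative only.

```python
import numpy as np

def gaussian_bases(x, centers, width):
    """Gaussian local bases phi_k(x) = exp(-((x - c_k) / width)^2)."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

def max_fit_error(f, Phi, x):
    """Max error of the least-squares fit of f in the span of Phi's columns."""
    coeffs, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)
    return np.max(np.abs(Phi @ coeffs - f(x)))

f = lambda x: np.sin(2.0 * np.pi * x)   # illustrative target function
x = np.linspace(0.0, 1.0, 400)

# Strategy 1 (illustrative reading): the grid spacing stays fixed; bases are
# added as new families at the same centers with progressively different
# widths, i.e. only the shape of the bases changes.
centers = np.linspace(0.0, 1.0, 11)     # fixed spacing h = 0.1
widths = []
for w in (0.3, 0.15, 0.075):
    widths.append(w)
    Phi = np.hstack([gaussian_bases(x, centers, wi) for wi in widths])
    print(f"fixed spacing, {Phi.shape[1]} bases: "
          f"max error {max_fit_error(f, Phi, x):.2e}")

# Strategy 2 (illustrative reading): spacing and shape are refined together;
# the width shrinks in proportion to the grid spacing h, so the added bases
# remain locally supported at every refinement level.
for n in (6, 11, 21):
    h = 1.0 / (n - 1)
    Phi = gaussian_bases(x, np.linspace(0.0, 1.0, n), 2.0 * h)
    print(f"coupled, n={n}, width={2.0 * h:.2f}: "
          f"max error {max_fit_error(f, Phi, x):.2e}")
```

In this sketch the coupled strategy keeps the ratio of basis width to spacing constant, which is the usual way to preserve locality and conditioning as a shift-invariant basis is refined; the fixed-spacing strategy instead enriches the dictionary in place by varying only the shape parameter.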