It is well known that single-hidden-layer feedforward networks with radial basis function (RBF) kernels are universal approximators when all parameters of the network are adjusted by some learning algorithm. However, as observed in most neural network implementations, tuning all parameters of the network can make learning complicated and lead to poor generalization, overtraining, and instability. Unlike conventional neural network theories, this brief gives a constructive proof that a decay RBF neural network with n + 1 hidden neurons can interpolate n + 1 multivariate samples with zero error. It then proves that the given decay RBFs can uniformly approximate any continuous multivariate function to arbitrary precision without training. Two numerical experiments demonstrate faster convergence and better generalization performance than the conventional RBF algorithm, the BP algorithm, extreme learning machines, and support vector machines.
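To make the zero-error interpolation claim concrete, here is a minimal sketch of classic exact RBF interpolation with one Gaussian center per sample; the Gaussian kernel, the fixed width, and the function names are illustrative assumptions, not the decay-RBF construction proved in the brief (which avoids training altogether).

```python
import numpy as np

def rbf_interpolate(X, y, width=1.0):
    """Fit a Gaussian RBF network that interpolates (X, y) exactly.

    One Gaussian kernel is centered at each of the n + 1 samples, and the
    output weights solve the (n+1) x (n+1) linear system G w = y. For
    distinct centers the Gaussian Gram matrix is positive definite, so the
    system has a unique solution and the training error is exactly zero.
    """
    # Pairwise squared distances between all sample points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * width ** 2))   # Gram (interpolation) matrix
    w = np.linalg.solve(G, y)              # zero-error output weights
    # Return the fitted network as a function of new inputs Z.
    return lambda Z: np.exp(
        -((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2.0 * width ** 2)
    ) @ w

# Usage: interpolate 6 random 2-D samples and verify zero training error.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(6, 2))
y = np.sin(X[:, 0]) + np.cos(X[:, 1])
f = rbf_interpolate(X, y)
print(np.max(np.abs(f(X) - y)))  # ~0, up to floating-point round-off
```

This is the standard interpolation route that the brief contrasts with: it requires solving a dense linear system, whereas the decay-RBF construction assigns the network parameters constructively.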