It is well known that single-hidden-layer feed-forward neural networks (SLFNs) with at most n hidden neurons can learn n distinct samples with zero error, and that the weights connecting the input neurons to the hidden neurons, as well as the hidden-node thresholds, can be chosen randomly. That is, for n distinct samples there exist SLFNs with n hidden neurons that interpolate them; such networks are called exact interpolation networks for the samples. However, for some target functions (such as continuous or integrable functions), not every exact interpolation network approximates the target well. Using a functional-analytic approach, this paper rigorously proves that for given distinct samples there exists an SLFN that not only exactly interpolates the samples but also nearly best approximates the target function.
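The interpolation property stated above can be illustrated numerically. The sketch below (not the paper's construction; the activation, sample function, and random seed are illustrative assumptions) draws the input-to-hidden weights and thresholds at random, forms the n-by-n hidden-layer output matrix for n distinct samples, and solves a linear system for the output weights so the SLFN reproduces all n targets exactly, up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(0)

# n distinct one-dimensional samples of an illustrative target function.
n = 8
X = np.linspace(-1.0, 1.0, n).reshape(n, 1)   # inputs, shape (n, 1)
y = np.sin(3.0 * X[:, 0])                     # targets, shape (n,)

# Random input weights and hidden-node thresholds (chosen freely, as the
# interpolation result allows).
W = rng.normal(size=(n, 1))                   # input-to-hidden weights
b = rng.normal(size=n)                        # hidden-node thresholds

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hidden-layer output matrix H, shape (n, n); almost surely invertible
# for distinct inputs and generic random weights.
H = sigmoid(X @ W.T + b)

# Output weights solving H @ beta = y give an exact interpolation network.
beta = np.linalg.solve(H, y)

# Interpolation error at the samples is at machine-precision level.
err = np.max(np.abs(H @ beta - y))
```

With n hidden neurons the system is square, so `beta` exists whenever `H` is nonsingular; the resulting network matches all n samples but, as the abstract notes, need not approximate the target function well away from them.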