Interpolation by ridge polynomials and its application in neural networks
Journal of Computational and Applied Mathematics - Selected papers of the international symposium on applied mathematics, August 2000, Dalian, China
It is well known that standard single-hidden-layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (xi, ti) with zero error, and that the weights connecting the input neurons to the hidden neurons can be chosen "almost" arbitrarily. However, these results were obtained for the case where the activation function of the hidden neurons is the signum function. This paper rigorously proves that standard SLFNs with at most N hidden neurons and with any bounded nonlinear activation function that has a limit at one infinity can learn N distinct samples (xi, ti) with zero error; for such general activation functions, the previous method of arbitrarily choosing the weights is no longer feasible. The proof is constructive and thus yields a method for directly computing the weights of a standard SLFN with any such bounded nonlinear activation function, in contrast to the iterative training algorithms in the literature.
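The zero-error claim can be illustrated numerically. The sketch below is not the paper's constructive proof; it is a minimal ELM-style demonstration (all names and the tanh activation are illustrative assumptions) that an SLFN with N hidden neurons fits N distinct samples exactly whenever the N x N hidden-layer output matrix H is invertible, which holds almost surely for randomly chosen input weights:

```python
import numpy as np

# Illustrative sketch: SLFN with N hidden neurons fitting N samples exactly.
rng = np.random.default_rng(0)

N, d = 8, 3                      # N samples, input dimension d
X = rng.normal(size=(N, d))      # distinct inputs x_i
T = rng.normal(size=(N, 1))      # targets t_i

W = rng.normal(size=(d, N))      # random input-to-hidden weights ("almost" arbitrary)
b = rng.normal(size=N)           # hidden-neuron biases
H = np.tanh(X @ W + b)           # N x N hidden-layer output matrix

# Output weights by a direct linear solve -- no iterative training.
beta = np.linalg.solve(H, T)
residual = np.max(np.abs(H @ beta - T))
print(residual)                  # close to zero, up to floating-point error
```

Because H is square here (as many hidden neurons as samples), the output weights are obtained in one linear solve; the random choice of W and b makes H nonsingular with probability one, mirroring the "almost arbitrarily" chosen input-side weights in the text.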