Approximation capabilities of multilayer feedforward networks. Neural Networks.
A resource-allocating network for function interpolation. Neural Computation.
Universal approximation using radial-basis-function networks. Neural Computation.
Neural Networks: Tricks of the Trade (book; an outgrowth of a 1996 NIPS workshop).
Approximation by fully complex multilayer perceptrons. Neural Computation.
Fully complex extreme learning machine. Neurocomputing.
The wavelet transform, time-frequency localization and signal analysis. IEEE Transactions on Information Theory.
Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory.
Hinging hyperplanes for regression, classification, and function approximation. IEEE Transactions on Information Theory.
Objective functions for training new hidden units in constructive neural networks. IEEE Transactions on Neural Networks.
Real-time learning capability of neural networks. IEEE Transactions on Neural Networks.
Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks.
A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Transactions on Neural Networks.
Convergence analysis of convex incremental neural networks. Annals of Mathematics and Artificial Intelligence.
Evolutionary product-unit neural networks classifiers. Neurocomputing.
An adaptive optimization scheme with satisfactory transient performance. Automatica.
Approximation capabilities of multilayer fuzzy neural networks on the set of fuzzy-valued functions. Information Sciences.
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics (special issue on cybernetics and cognitive informatics).
Large scale nonlinear control system fine-tuning through learning. IEEE Transactions on Neural Networks.
Error minimized extreme learning machine with growth of hidden nodes and incremental learning. IEEE Transactions on Neural Networks.
A constructive enhancement for online sequential extreme learning machine. Proceedings of the 2009 International Joint Conference on Neural Networks (IJCNN'09).
POFGEC: growing neural network of classifying potential function generators. International Journal of Knowledge Engineering and Soft Data Paradigms.
OP-KNN: method and applications. Advances in Artificial Neural Systems.
Constructive approximation to multivariate function by decay RBF neural network. IEEE Transactions on Neural Networks.
Two-stage extreme learning machine for regression. Neurocomputing.
Ordinal extreme learning machine. Neurocomputing.
Approximation capability of interpolation neural networks. Neurocomputing.
The multidimensional function approximation based on constructive wavelet RBF neural network. Applied Soft Computing.
Proceedings of the Second International Conference on Autonomous and Intelligent Systems (AIS'11).
Face recognition based on kernelized extreme learning machine. Proceedings of the Second International Conference on Autonomous and Intelligent Systems (AIS'11).
Voting based extreme learning machine. Information Sciences.
A new automatic target recognition system based on wavelet extreme learning machine. Expert Systems with Applications.
Modeling spectral data based on mutual information and kernel extreme learning machines. Proceedings of the 9th International Conference on Advances in Neural Networks (ISNN'12), Part I.
Applying least angle regression to ELM. Proceedings of the 25th Canadian Conference on Advances in Artificial Intelligence (Canadian AI'12).
Weighted extreme learning machine for imbalance learning. Neurocomputing.
Comparing studies of learning methods for human face gender recognition. Proceedings of the 7th Chinese Conference on Biometric Recognition (CCBR'12).
PCA-ELM: a robust and pruned extreme learning machine approach based on principal component analysis. Neural Processing Letters.
Parallel chaos search based incremental extreme learning machine. Neural Processing Letters.
Extreme learning machine: a robust modeling technique? Yes! Proceedings of the 12th International Conference on Artificial Neural Networks: Advances in Computational Intelligence (IWANN'13), Part I.
Comments on the "No-Prop" algorithm. Neural Networks.
Meta-ELM: ELM with ELM hidden nodes. Neurocomputing.
Clustering in extreme learning machine feature space. Neurocomputing.
Fast sparse approximation of extreme learning machine. Neurocomputing.
Hybrid extreme rotation forest. Neural Networks.
Learning to rank with extreme learning machine. Neural Processing Letters.
Applications of hybrid extreme rotation forests for image segmentation. International Journal of Hybrid Intelligent Systems.
Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879-892] have recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (drawn from any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, it does not recalculate the output weights of the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes with a convex optimization method whenever a new hidden node is randomly added. Furthermore, we show that, given a type of piecewise continuous computational hidden node (which need not be neuron-like), if SLFNs f_n(x) = ∑_{i=1}^{n} β_i G(x, a_i, b_i) can work as universal approximators with adjustable hidden node parameters, then, from a function approximation point of view, the hidden node parameters of such "generalized" SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can in fact be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.
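The I-ELM construction described above can be sketched in a few lines: each step randomly generates one hidden node, analytically computes that node's output weight against the current residual, and never revisits earlier weights. This is a minimal illustrative sketch, not the authors' implementation; the sigmoid node type, the uniform sampling range, and the function names are assumptions, and the convex recalculation of existing weights proposed in the paper is deliberately omitted.

```python
import numpy as np

def i_elm(X, T, max_nodes=50, seed=0):
    """Sketch of incremental ELM (I-ELM) with random additive sigmoid nodes.

    Hypothetical implementation: node parameters (a, b) are sampled
    uniformly (any continuous distribution would do per the theory),
    and each output weight beta minimizes ||e - beta * h|| in closed form.
    """
    rng = np.random.default_rng(seed)
    e = np.asarray(T, dtype=float).copy()      # current residual error
    nodes, betas = [], []
    for _ in range(max_nodes):
        # randomly generate one hidden node; its parameters are never tuned
        a = rng.uniform(-1.0, 1.0, X.shape[1])
        b = rng.uniform(-1.0, 1.0)
        h = 1.0 / (1.0 + np.exp(-(X @ a + b)))  # node activations G(x, a, b)
        beta = (e @ h) / (h @ h)                # analytic output weight
        e = e - beta * h                        # existing weights stay fixed
        nodes.append((a, b))
        betas.append(beta)
    return nodes, np.array(betas), e

def i_elm_predict(X, nodes, betas):
    """Evaluate f_n(x) = sum_i beta_i * G(x, a_i, b_i)."""
    H = np.column_stack([1.0 / (1.0 + np.exp(-(X @ a + b))) for a, b in nodes])
    return H @ betas
```

Because each beta is the least-squares minimizer against the residual, the residual norm is non-increasing in the number of nodes; the convex variant proposed in the paper improves the convergence rate further by also rescaling the earlier output weights at each step.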