Multiple comparison procedures.
The nature of statistical learning theory.
Kernel principal component analysis. In: Advances in kernel methods.
Journal of Global Optimization.
IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB).
Convex incremental extreme learning machine. Neurocomputing.
Universal approximation and QoS violation application of extreme learning machine. Neural Processing Letters.
OP-ELM: theory, experiments and a toolbox. In: Proceedings of the 18th International Conference on Artificial Neural Networks (ICANN '08), Part I.
Error minimized extreme learning machine with growth of hidden nodes and incremental learning. IEEE Transactions on Neural Networks.
Evolutionary extreme learning machine. Pattern Recognition.
OP-ELM: optimally pruned extreme learning machine. IEEE Transactions on Neural Networks.
Composite function wavelet neural networks with differential evolution and extreme learning machine. Neural Processing Letters.
Weighting efficient accuracy and minimum sensitivity for evolving multi-class classifiers. Neural Processing Letters.
Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks.
It is well known that single-hidden-layer feedforward networks (SLFNs) with additive hidden nodes are universal approximators. However, training these models was slow until the introduction of the extreme learning machine (ELM) (Huang et al., Neurocomputing 70(1-3):489-501, 2006) and its later improvements. Before ELM, the fastest algorithms for training SLFNs were gradient-based ones, which must be applied iteratively until a suitable model is obtained. This slow convergence meant that SLFNs were not used as widely as their generally good performance would warrant. ELM made SLFNs a practical option for classifying large numbers of patterns in a short time. Until now, the hidden nodes have been randomly initialized and, in some approaches, subsequently tuned. This paper proposes a deterministic algorithm to initialize any hidden node with an additive activation function for training with ELM. Our algorithm uses the information retrieved from principal component analysis (PCA) to fit the hidden nodes. This approach considerably decreases the computational cost compared with later ELM improvements while outperforming them.
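To make the training pipeline concrete, below is a minimal sketch in Python/NumPy of an ELM for an SLFN, contrasting the classic random hidden-node initialization with a deterministic PCA-based one in the spirit of the abstract. The function names (elm_fit, pca_init, random_init), the tanh activation, and the specific rule of taking the leading principal directions as input weights with a mean-centering bias are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def elm_fit(X, T, W, b):
    """Solve the ELM output weights by least squares.

    X: (n_samples, n_features) inputs
    T: (n_samples, n_outputs) targets (e.g. one-hot labels)
    W: (n_features, n_hidden) hidden input weights
    b: (n_hidden,) hidden biases
    """
    H = np.tanh(X @ W + b)        # additive hidden nodes (tanh chosen for illustration)
    beta = np.linalg.pinv(H) @ T  # Moore-Penrose pseudoinverse: one-shot least-squares fit
    return beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def random_init(n_features, n_hidden, rng):
    """Classic ELM: hidden parameters drawn at random and left fixed."""
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    return W, b

def pca_init(X, n_hidden):
    """Deterministic initialization in the spirit of the paper (an assumption:
    the authors' exact fitting rule for weights and biases may differ).
    Uses the leading principal directions of the data as input weights,
    with a bias that centers each node at the data mean."""
    Xc = X - X.mean(axis=0)
    # principal directions via SVD of the centered data (rows of Vt)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_hidden].T            # (n_features, n_hidden), requires n_hidden <= n_features
    b = -(X.mean(axis=0) @ W)      # so that tanh acts on mean-centered projections
    return W, b

# Toy usage: a two-class problem with one-hot targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
T = np.eye(2)[y]

W, b = pca_init(X, n_hidden=3)     # deterministic: no random draws, no tuning
# W, b = random_init(X.shape[1], 3, rng)  # classic ELM baseline for comparison
beta = elm_fit(X, T, W, b)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
print("train accuracy:", (pred == y).mean())
```

Note that in ELM only beta is learned; W and b stay fixed, which is why a single pseudoinverse solve replaces iterative gradient descent. The PCA-based variant removes the randomness entirely, and under this sketch's assumptions the number of hidden nodes is tied to the number of retained principal components.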