Neural networks for pattern recognition
Neural Networks: A Comprehensive Foundation
Ensembling neural networks: many could be better than all
Artificial Intelligence
Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning)
Convex incremental extreme learning machine
Neurocomputing
OP-ELM: Theory, Experiments and a Toolbox
ICANN '08 Proceedings of the 18th International Conference on Artificial Neural Networks, Part I
Error minimized extreme learning machine with growth of hidden nodes and incremental learning
IEEE Transactions on Neural Networks
Fully complex extreme learning machine
Neurocomputing
ICANN/ICONIP'03 Proceedings of the 2003 Joint International Conference on Artificial Neural Networks and Neural Information Processing
OP-ELM: optimally pruned extreme learning machine
IEEE Transactions on Neural Networks
LIBSVM: A library for support vector machines
ACM Transactions on Intelligent Systems and Technology (TIST)
Multiresponse sparse regression with application to multidimensional scaling
ICANN'05 Proceedings of the 15th International Conference on Artificial Neural Networks: Formal Models and Their Applications - Volume Part II
Universal approximation using incremental constructive feedforward networks with random hidden nodes
IEEE Transactions on Neural Networks
This paper describes the original (basic) Extreme Learning Machine (ELM) and studies its properties, such as robustness and sensitivity to variable selection. Several extensions of the original ELM are then presented and compared. First, the Tikhonov-Regularized Optimally-Pruned Extreme Learning Machine (TROP-ELM) is summarized: an improvement of the Optimally-Pruned Extreme Learning Machine (OP-ELM) that adds an L2 regularization penalty within the OP-ELM. Second, a methodology to linearly ensemble ELMs (ELM-ELM) is presented to improve the performance of the original ELM. These methodologies (TROP-ELM and ELM-ELM) are tested against state-of-the-art methods such as Support Vector Machines and Gaussian Processes, as well as against the original ELM and OP-ELM, on ten different data sets. A specific experiment testing the sensitivity of these methodologies to variable selection is also presented.
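To illustrate the basic ELM the abstract refers to, here is a minimal sketch, not the paper's implementation: a single-hidden-layer network whose input weights and biases are drawn at random and whose output weights are solved in closed form by least squares via the pseudo-inverse. The function names (`elm_fit`, `elm_predict`) and the choice of `tanh` as the activation are assumptions for this example.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=None):
    """Train a basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    # Hidden-layer input weights and biases are random and never trained.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    # Hidden-layer activations for the training set.
    H = np.tanh(X @ W + b)
    # Output weights: minimum-norm least-squares solution via pseudo-inverse.
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict with a trained ELM."""
    return np.tanh(X @ W + b) @ beta
```

The only free choice at training time is the number of hidden nodes; since the hidden layer is fixed, "training" reduces to one linear solve, which is what makes the ELM fast compared with backpropagation-trained networks.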