In this paper, we address the architectural design of the ELM regressor by applying a constructive method based on the ELM algorithm. Once the nonlinearities of the ELM network are fixed by randomly generating the hidden-node parameters, the network corresponds to a linear regression model, so the selection of hidden nodes can be regarded as subset model selection in linear regression. The proposed constructive hidden-node selection for ELM (referred to as CS-ELM) selects the number of hidden nodes at which the unbiased-risk-estimation-based criterion C_p reaches its minimum. CS-ELM is compared with other model selection algorithms for ELM on several real benchmark regression applications, and the empirical study shows that CS-ELM automatically leads to a compact network structure.
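The idea above can be sketched as follows. This is a minimal illustration, not the authors' CS-ELM implementation: it fixes random hidden-node parameters, treats the hidden-layer output as a linear regression design matrix, and grows the candidate model one node at a time, keeping the node count that minimizes a Mallows' C_p-style criterion. The function name `elm_cp_selection`, the tanh activation, the uniform parameter ranges, and the noise-variance estimate from the largest candidate model are all assumptions for illustration; the paper's method may order and orthogonalize candidate nodes differently.

```python
import numpy as np

def elm_cp_selection(X, y, max_nodes=50, seed=0):
    """Illustrative sketch: grow random ELM hidden nodes one at a time
    and pick the count that minimizes a Mallows' C_p-style criterion.
    (Hypothetical helper, not the paper's exact CS-ELM procedure.)"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Randomly generated input weights and biases fix the network's
    # nonlinearities up front, as in the standard ELM algorithm.
    W = rng.uniform(-1.0, 1.0, size=(d, max_nodes))
    b = rng.uniform(-1.0, 1.0, size=max_nodes)
    H_full = np.tanh(X @ W + b)  # (n, max_nodes) hidden-layer outputs
    # Noise-variance estimate from the largest candidate model
    # (one common choice when computing C_p).
    beta_full, *_ = np.linalg.lstsq(H_full, y, rcond=None)
    rss_full = np.sum((y - H_full @ beta_full) ** 2)
    sigma2 = rss_full / max(n - max_nodes, 1)
    best_p, best_cp = None, np.inf
    for p in range(1, max_nodes + 1):
        # With fixed hidden parameters, fitting the output weights is
        # ordinary least squares on the first p hidden-node columns.
        H = H_full[:, :p]
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        rss = np.sum((y - H @ beta) ** 2)
        cp = rss / sigma2 - n + 2 * p  # Mallows' C_p
        if cp < best_cp:
            best_cp, best_p = cp, p
    return best_p, best_cp
```

Because the hidden-layer outputs are computed once and candidate models only differ in how many columns of `H_full` they use, each step costs one least-squares solve; the selected `best_p` plays the role of the automatically chosen compact network size.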