One of the open problems in neural network research is how to automatically determine network architectures for given applications. In this brief, we propose a simple and efficient approach to automatically determining the number of hidden nodes in generalized single-hidden-layer feedforward networks (SLFNs), whose hidden nodes need not be neuron-like. This approach, referred to as the error-minimized extreme learning machine (EM-ELM), can add random hidden nodes to SLFNs one by one or group by group (with varying group size). As the network grows, the output weights are updated incrementally. The convergence of this approach is also proved in this brief. Simulation results demonstrate that the new approach is much faster than other sequential/incremental/growing algorithms while achieving good generalization performance.