A resource-allocating network for function interpolation. Neural Computation.
A function estimation approach to sequential learning with neural networks. Neural Computation.
Matrix Computations (3rd ed.).
Neural Networks: Tricks of the Trade.
Approximation by fully complex multilayer perceptrons. Neural Computation.
Convex incremental extreme learning machine. Neurocomputing.
Error minimized extreme learning machine with growth of hidden nodes and incremental learning. IEEE Transactions on Neural Networks.
Fully complex extreme learning machine. Neurocomputing.
An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics.
A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. IEEE Transactions on Neural Networks.
Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks.
A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Transactions on Neural Networks.
A group selection evolutionary extreme learning machine approach for time-variant neural networks. Proceedings of the 20th Italian Workshop on Neural Nets (WIRN 2010).
A multi-objective micro genetic ELM algorithm. Neurocomputing.
Online Sequential Extreme Learning Machine (OS-ELM), proposed by Liang et al. [1], is a faster and more accurate online sequential learning algorithm than other current sequential algorithms. It can learn data one-by-one or chunk-by-chunk with fixed or varying chunk size. However, one remaining challenge for OS-ELM is that it cannot determine the optimal network structure automatically. In this paper, we propose a Constructive Enhancement for OS-ELM (CEOS-ELM), which can add random hidden nodes one-by-one or group-by-group with fixed or varying group size. CEOS-ELM searches for the optimal network architecture during the sequential learning process, and it can handle both additive and radial basis function (RBF) hidden nodes. The optimal number of hidden nodes is obtained automatically after training. Simulation results show that CEOS-ELM achieves generalization performance comparable to OS-ELM with a more compact network structure.