Neural network design
Pattern recognition using neural networks: theory and algorithms for engineers and scientists
Neural Networks for Pattern Recognition
Neural Networks: A Comprehensive Foundation
Convex incremental extreme learning machine
Neurocomputing
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics - Special issue on cybernetics and cognitive informatics
Error minimized extreme learning machine with growth of hidden nodes and incremental learning
IEEE Transactions on Neural Networks
Fully complex extreme learning machine
Neurocomputing
Real-time learning capability of neural networks
IEEE Transactions on Neural Networks
Universal approximation using incremental constructive feedforward networks with random hidden nodes
IEEE Transactions on Neural Networks
A fast and accurate online sequential learning algorithm for feedforward networks
IEEE Transactions on Neural Networks
Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks (SLFNs) whose hidden nodes need not be neuron-like, and they perform well in both regression and classification applications. An active topic in ELM research is how to determine the network architecture automatically for a given application. In this paper, we propose an extreme learning machine with adaptive growth of hidden nodes and incremental updating of output weights by an error-minimization-based method (AIE-ELM). AIE-ELM grows randomly generated hidden nodes in an adaptive way: existing hidden nodes may be replaced by newly generated hidden nodes with better performance, rather than always being retained as in other incremental ELMs. The output weights are updated incrementally in the same way as in the error-minimized ELM (EM-ELM). Simulation results demonstrate that the new approach achieves a more compact network architecture than EM-ELM, with better generalization performance.
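The growth-with-replacement idea in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' implementation: hidden-node parameters are drawn uniformly from [-1, 1] with a sigmoid activation, the "replacement" step swaps one randomly chosen existing node for a fresh candidate only when that lowers the training error, and the output weights are refit by a full least-squares solve at each step for clarity (the actual EM-ELM updates them incrementally with a recursive block formula instead).

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fit_beta(H, y):
    """Least-squares output weights for hidden matrix H, plus training error."""
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    err = np.linalg.norm(H @ beta - y)
    return beta, err


def aie_elm_sketch(X, y, max_nodes=15, seed=0):
    """Toy sketch of ELM growth with adaptive node replacement.

    Each iteration first tries replacing one existing random hidden node
    with a fresh random candidate (keeping the swap only if training error
    drops), then appends one new random hidden node.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = np.zeros((d, 0))   # input weights, one column per hidden node
    b = np.zeros(0)        # hidden biases
    for _ in range(max_nodes):
        if W.shape[1] > 0:
            # adaptive step: tentatively replace one existing node
            _, err_cur = fit_beta(sigmoid(X @ W + b), y)
            W_rep, b_rep = W.copy(), b.copy()
            j = rng.integers(W.shape[1])
            W_rep[:, j] = rng.uniform(-1, 1, d)
            b_rep[j] = rng.uniform(-1, 1)
            _, err_rep = fit_beta(sigmoid(X @ W_rep + b_rep), y)
            if err_rep < err_cur:  # keep the swap only if it helps
                W, b = W_rep, b_rep
        # growth step: append a new randomly generated hidden node
        W = np.hstack([W, rng.uniform(-1, 1, (d, 1))])
        b = np.append(b, rng.uniform(-1, 1))
    beta, err = fit_beta(sigmoid(X @ W + b), y)
    return W, b, beta, err
```

For example, fitting `y = sin(x)` on 100 points with 15 hidden nodes drives the training error well below that of the zero predictor; the replacement step is what lets a fixed-size budget of random nodes keep improving rather than being locked in once generated.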