Approximation capabilities of multilayer feedforward networks
Neural Networks
A resource-allocating network for function interpolation
Neural Computation
Neural Networks: Tricks of the Trade (an outgrowth of a 1996 NIPS workshop)
Convex incremental extreme learning machine
Neurocomputing
Efficient agnostic learning of neural networks with bounded fan-in
IEEE Transactions on Information Theory
Objective functions for training new hidden units in constructive neural networks
IEEE Transactions on Neural Networks
On the optimality of neural-network approximation using incremental algorithms
IEEE Transactions on Neural Networks
On the geometric convergence of neural approximations
IEEE Transactions on Neural Networks
Universal approximation using incremental constructive feedforward networks with random hidden nodes
IEEE Transactions on Neural Networks
Improving iris recognition through new target vectors in MLP artificial neural networks
ANNPR'12: Proceedings of the 5th INNS IAPR TC3 GIRPR Conference on Artificial Neural Networks in Pattern Recognition
Recently, a convex incremental algorithm (CI-ELM) was proposed in Huang and Chen (Neurocomputing 70:3056–3062, 2007); it randomly chooses hidden neurons and then analytically determines the output weights connecting the hidden layer to the output layer. Although the hidden neurons are generated randomly, the network constructed by CI-ELM still retains the universal approximation property. This random-approximation theory breaks through a limitation of most conventional approaches by eliminating the need to tune hidden neurons. However, owing to this randomness, some neurons contribute little to reducing the residual error, which ultimately increases the complexity and computational cost of the network; as a consequence, CI-ELM cannot give a precise convergence rate. Based on the results of Lee et al. (IEEE Trans Inf Theory 42(6):2118–2132, 1996), we first establish the convergence rate of a maximum CI-ELM and then systematically analyze the convergence rate of an enhanced CI-ELM. Unlike CI-ELM, these two algorithms choose hidden neurons according to a maximum or optimality principle while keeping the same network structure as CI-ELM. The proof process also shows that our algorithms achieve smaller residual errors than CI-ELM. Because the proposed networks discard such "useless" neurons, they improve the efficiency of the resulting neural networks. Experimental results on benchmark regression problems support these conclusions.
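To make the incremental construction concrete, the following is a minimal Python sketch in the spirit of the abstract: each step draws random candidate hidden neurons, sets the output weight analytically through a convex update of the network output, and keeps the candidate that most reduces the residual error (a stand-in for the maximum/optimality principle mentioned above). The function name enhanced_ci_elm, the sigmoid activation, the number of candidates per step, and the clipping of beta to [0, 1] are illustrative assumptions, not the authors' implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def enhanced_ci_elm(X, y, max_neurons=50, candidates=10, seed=None):
    # Sketch of a convex incremental ELM with candidate selection.
    # At each step several random hidden neurons are drawn; the one that
    # most reduces the residual error is kept, and its output weight is
    # set analytically via the convex update f_n = (1 - beta) f_{n-1} + beta g_n.
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    f = np.zeros(n_samples)          # current network output on the training data
    neurons = []                     # (input weights, bias, convex output weight)
    for _ in range(max_neurons):
        best = None
        for _ in range(candidates):
            w = rng.uniform(-1.0, 1.0, n_features)   # random input weights
            b = rng.uniform(-1.0, 1.0)               # random bias
            g = sigmoid(X @ w + b)                   # candidate neuron output
            d = g - f
            denom = d @ d
            if denom < 1e-12:
                continue
            beta = (y - f) @ d / denom               # analytic output weight
            beta = np.clip(beta, 0.0, 1.0)           # keep the update convex (assumption)
            f_new = (1.0 - beta) * f + beta * g
            err = np.mean((y - f_new) ** 2)
            if best is None or err < best[0]:
                best = (err, w, b, beta, f_new)
        if best is None:
            break
        _, w, b, beta, f = best                      # keep only the best candidate
        neurons.append((w, b, beta))
    return neurons, f

For example, enhanced_ci_elm(X, y, max_neurons=20) would return the selected neurons together with the fitted outputs on the training data; discarding the weaker candidates at each step is what the abstract refers to as removing "useless" neurons.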