Genetic Algorithms Plus Data Structures Equals Evolution Programs
On Comparing Classifiers: Pitfalls to Avoid and a Recommended Approach
Data Mining and Knowledge Discovery
Rapid and brief communication: Evolutionary extreme learning machine
Pattern Recognition
Learning capability and storage capacity of two-hidden-layer feedforward networks
IEEE Transactions on Neural Networks
Real-time learning capability of neural networks
IEEE Transactions on Neural Networks
Universal approximation using incremental constructive feedforward networks with random hidden nodes
IEEE Transactions on Neural Networks
IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB)
Online learning neural tracker
Neurocomputing
Information Sciences: an International Journal
A study on the randomness reduction effect of extreme learning machine with ridge regression
ISNN'13 Proceedings of the 10th international conference on Advances in Neural Networks - Volume Part I
This paper presents a performance enhancement scheme for the recently developed extreme learning machine (ELM) for multi-category sparse data classification problems. ELM is a single-hidden-layer neural network with good generalization capability and extremely fast learning speed. In ELM, the input weights are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm on sparse data classification problems depends critically on three free parameters: the number of hidden neurons, the input weights, and the bias values, all of which must be chosen optimally. Selecting these parameters for the best performance of ELM involves a complex optimization problem. In this paper, we present a new real-coded genetic algorithm approach called 'RCGA-ELM' to select the optimal number of hidden neurons, input weights, and bias values, which results in better performance. Two new genetic operators, called the 'network based operator' and the 'weight based operator', are proposed to find a compact network with higher generalization performance. We also present an alternate, less computationally intensive approach called 'sparse-ELM', which searches for the best parameters of ELM using K-fold validation. A multi-class human cancer classification problem using micro-array gene expression data (which is sparse) is used to evaluate the performance of the two schemes. Results indicate that the proposed RCGA-ELM and sparse-ELM significantly improve ELM performance for sparse multi-category classification problems.
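To illustrate the basic ELM training procedure described above (random input weights and biases, analytically computed output weights), here is a minimal sketch in Python/NumPy. The function names, the choice of tanh activation, and the toy XOR-style data are illustrative assumptions, not the paper's implementation; the analytic step uses the Moore-Penrose pseudo-inverse, as is standard for ELM.

```python
import numpy as np

def elm_train(X, Y, n_hidden, seed=0):
    """Basic ELM: random input weights/biases, least-squares output weights."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem (one-hot XOR labels), purely for illustration.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[1., 0.], [0., 1.], [0., 1.], [1., 0.]])
W, b, beta = elm_train(X, Y, n_hidden=20)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```

Note that only `beta` is learned; this is what makes ELM fast, and also why its accuracy hinges on the randomly drawn `W` and `b` and on `n_hidden` — the three free parameters that RCGA-ELM and sparse-ELM tune.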