Evolutionary selection extreme learning machine optimization for regression
Soft Computing – A Fusion of Foundations, Methodologies and Applications, Special Issue on Extreme Learning Machines (ELM 2011), Hangzhou, China, December 6–8, 2011
Extreme learning machine (ELM) was proposed as a new learning algorithm for training single-hidden-layer feedforward neural networks (SLFNs). ELM has been shown to be highly efficient; however, because the parameters of the hidden nodes are determined randomly, suboptimal parameters may be generated, harming generalization performance and stability. Moreover, ELM may suffer from overtraining, since the entire training dataset is used to minimize the training error. In this paper, a hybrid model is proposed to alleviate these weaknesses of ELM. The model first adopts genetic algorithms (GAs) to produce a group of candidate networks; then, according to a specific ranking strategy, some of the networks are selected to form an ensemble. To verify the performance of our method, empirical comparisons were carried out against the canonical ELM, E-ELM, a simple ensemble, EE-ELM, EN-ELM, Bagging, and AdaBoost on both regression and classification problems. The results show that our method generates more robust networks with better generalization performance.
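The pipeline described above (train a pool of candidate SLFNs, rank them, and ensemble the best few) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's algorithm: it replaces the GA search with simple re-randomization of candidate ELMs, uses validation MSE as a hypothetical ranking criterion, and averages the selected networks' outputs; the function and parameter names (`ensemble_by_validation_rank`, `n_candidates`, `n_select`) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Single-hidden-layer feedforward network trained ELM-style:
    hidden-node weights and biases are random, and output weights are
    solved in one step via the Moore-Penrose pseudo-inverse."""

    def __init__(self, n_hidden, rng):
        self.n_hidden = n_hidden
        self.rng = rng

    def fit(self, X, y):
        n_features = X.shape[1]
        # Randomly determined hidden-node parameters (the source of
        # instability the paper's selection strategy targets).
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)     # hidden-layer output matrix
        self.beta = np.linalg.pinv(H) @ y    # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta


def ensemble_by_validation_rank(X_tr, y_tr, X_val, y_val,
                                n_candidates=20, n_select=5, n_hidden=25):
    """Build a pool of candidate ELMs (re-randomized here, where the paper
    evolves candidates with a GA), rank them by validation error, and
    average the predictions of the best few."""
    pool = [ELM(n_hidden, rng).fit(X_tr, y_tr) for _ in range(n_candidates)]
    errs = [np.mean((m.predict(X_val) - y_val) ** 2) for m in pool]
    chosen = [pool[i] for i in np.argsort(errs)[:n_select]]
    return lambda X: np.mean([m.predict(X) for m in chosen], axis=0)
```

Holding out a validation set for the ranking step also addresses the overtraining concern mentioned above, since candidates are scored on data they were not fitted to.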