Genetic ensemble of extreme learning machine
Neurocomputing
Ensemble learning aims to improve the generalization power and reliability of learner models through sampling and optimization techniques. It has been shown that an ensemble built from a selective subset of base learners can outperform one that uses the entire pool. However, efficiently constructing such an ensemble from a given learner pool remains an open problem. This paper presents an evolutionary approach for constructing extreme learning machine (ELM) ensembles. The proposed algorithm uses model diversity as the fitness function to guide the selection of base learners, and produces an optimal solution while controlling the ensemble size. A comprehensive comparison is carried out, in which the basic ELM is used to generate a pool of neural networks and 12 benchmark regression datasets are employed in simulations. The reported results demonstrate that the proposed method outperforms other ensembling techniques, including simple averaging, bagging, and AdaBoost, in terms of both effectiveness and efficiency.
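The abstract describes the method only at a high level: train a pool of basic ELMs, then evolve a binary selection mask over the pool with a genetic algorithm. The sketch below is a hypothetical illustration of that recipe, not the paper's algorithm. In particular, the paper uses model diversity as the fitness function, but since the abstract does not define that measure, the sketch substitutes a clearly labeled stand-in (negative validation MSE plus a small size penalty, to mimic the ensemble-size control). All names (ELM, genetic_selection, size_penalty) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Basic extreme learning machine for regression: a random hidden layer
    whose output weights are solved in closed form by least squares."""
    def __init__(self, n_hidden=25):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        d = X.shape[1]
        self.W = rng.normal(size=(d, self.n_hidden))  # random input weights
        self.b = rng.normal(size=self.n_hidden)       # random biases
        H = np.tanh(X @ self.W + self.b)              # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y             # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

def ensemble_predict(pool, mask, X):
    """Simple average over the base learners selected by the binary mask."""
    preds = np.stack([m.predict(X) for m, s in zip(pool, mask) if s])
    return preds.mean(axis=0)

def fitness(pool, mask, X_val, y_val, size_penalty=1e-3):
    """Stand-in fitness (NOT the paper's diversity measure): negative
    validation MSE, minus a small penalty on ensemble size."""
    if mask.sum() == 0:
        return -np.inf
    err = np.mean((ensemble_predict(pool, mask, X_val) - y_val) ** 2)
    return -err - size_penalty * mask.sum()

def genetic_selection(pool, X_val, y_val, pop=30, gens=50, p_mut=0.05):
    """Evolve binary chromosomes, one bit per candidate base learner."""
    n = len(pool)
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(pool, c, X_val, y_val) for c in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[: pop // 2]]        # truncation selection
        children = []
        while len(children) < pop - len(parents):
            i, j = rng.choice(len(parents), 2, replace=False)
            cut = rng.integers(1, n)                   # one-point crossover
            child = np.concatenate([parents[i][:cut], parents[j][cut:]])
            flip = rng.random(n) < p_mut               # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        population = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(pool, c, X_val, y_val) for c in population])
    return population[np.argmax(scores)]

# Toy usage on synthetic regression data.
X = rng.uniform(-3, 3, size=(400, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=400)
X_tr, y_tr, X_val, y_val = X[:300], y[:300], X[300:], y[300:]

pool = [ELM().fit(X_tr, y_tr) for _ in range(40)]      # candidate learner pool
best = genetic_selection(pool, X_val, y_val)
print("selected", best.sum(), "of", len(pool), "ELMs")
print("ensemble val MSE:",
      np.mean((ensemble_predict(pool, best, X_val) - y_val) ** 2))
```

Because each ELM draws its hidden layer at random and only the output weights are fitted, the pool is cheap to generate, which is what makes a search over selection masks practical here.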