Dynamic ensemble extreme learning machine based on sample entropy
Soft Computing - A Fusion of Foundations, Methodologies and Applications, Special Issue on Extreme Learning Machines (ELM 2011), Hangzhou, China, December 6-8, 2011
The extreme learning machine (ELM) offers fast training speed and strong generalization performance. However, a single ELM is unstable in data classification. To overcome this drawback, a growing number of researchers turn to ensembles of ELMs. This paper proposes a method that integrates voting-based extreme learning machines (V-ELMs) with dissimilarity measures, termed D-ELM. First, based on different dissimilarity measures, a number of ELMs are removed from the ensemble pool. Then, the remaining ELMs are combined into an ensemble classifier by majority voting. Finally, the disagreement measure and the double-fault measure are used to validate the D-ELM. Theoretical analysis and experimental results on gene expression data demonstrate that (1) D-ELM achieves better classification accuracy with fewer ELMs, and (2) the double-fault-based D-ELM (DF-D-ELM) outperforms the disagreement-based D-ELM (D-D-ELM).
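The prune-then-vote pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the trained base ELMs are replaced by precomputed label vectors, the function names (`disagreement`, `double_fault`, `prune`, `majority_vote`) are hypothetical, and the pruning rule used here (keep the k members with the best average pairwise score) is one plausible reading of "remove a number of ELMs based on a dissimilarity measure".

```python
from collections import Counter

def disagreement(pi, pj):
    # Disagreement measure: fraction of samples on which the
    # two classifiers assign different labels (higher = more diverse).
    return sum(a != b for a, b in zip(pi, pj)) / len(pi)

def double_fault(pi, pj, y):
    # Double-fault measure: fraction of samples that BOTH
    # classifiers misclassify (lower = fewer coincident errors).
    return sum(a != t and b != t for a, b, t in zip(pi, pj, y)) / len(y)

def prune(preds, y, measure="double_fault", keep=3):
    # Score each member by its average pairwise measure against the
    # rest of the pool, then keep the `keep` best: the MOST diverse
    # under disagreement, the LEAST co-failing under double-fault.
    m = len(preds)
    scores = []
    for i in range(m):
        if measure == "disagreement":
            s = sum(disagreement(preds[i], preds[j]) for j in range(m) if j != i)
        else:
            s = sum(double_fault(preds[i], preds[j], y) for j in range(m) if j != i)
        scores.append(s / (m - 1))
    order = sorted(range(m), key=lambda i: scores[i],
                   reverse=(measure == "disagreement"))
    return [preds[i] for i in sorted(order[:keep])]

def majority_vote(preds):
    # Combine the surviving members' predictions by majority voting.
    return [Counter(col).most_common(1)[0][0] for col in zip(*preds)]

# Toy demo: 4 base classifiers' label vectors on 5 validation samples.
y = [0, 1, 1, 0, 1]
preds = [
    [0, 1, 1, 0, 1],   # perfect member
    [0, 1, 0, 0, 1],   # one error
    [1, 1, 1, 0, 0],   # two errors
    [1, 0, 0, 1, 0],   # wrong everywhere
]
kept = prune(preds, y, measure="double_fault", keep=3)
print(majority_vote(kept))  # → [0, 1, 1, 0, 1]
```

Note the asymmetry in the sort direction: disagreement rewards diversity, so high scores survive, while double-fault penalizes members whose errors coincide with the rest of the pool, so low scores survive; the abstract's finding that DF-D-ELM wins suggests that avoiding coincident errors matters more than raw diversity.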