Original Contribution: Stacked generalization
Neural Networks
Machine Learning
Bias/variance analyses of mixtures-of-experts architectures
Neural Computation
Voting over Multiple Condensed Nearest Neighbors
Artificial Intelligence Review - Special issue on lazy learning
The Random Subspace Method for Constructing Decision Forests
IEEE Transactions on Pattern Analysis and Machine Intelligence
Ensembling neural networks: many could be better than all
Artificial Intelligence
A Principal Components Approach to Combining Regression Estimates
Machine Learning
Using Correspondence Analysis to Combine Classifiers
Machine Learning
On the Boosting Pruning Problem
ECML '00 Proceedings of the 11th European Conference on Machine Learning
Combining Nearest Neighbor Classifiers Through Multiple Feature Subsets
ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning
Combining Multiple Representations and Classifiers for Pen-based Handwritten Digit Recognition
ICDAR '97 Proceedings of the 4th International Conference on Document Analysis and Recognition
The "Test and Select" Approach to Ensemble Combination
MCS '00 Proceedings of the First International Workshop on Multiple Classifier Systems
Methods for Designing Multiple Classifier Systems
MCS '01 Proceedings of the Second International Workshop on Multiple Classifier Systems
Combining Pattern Classifiers: Methods and Algorithms
Ensemble selection from libraries of models
ICML '04 Proceedings of the twenty-first international conference on Machine learning
A Theoretical and Experimental Analysis of Linear Combiners for Multiple Classifier Systems
IEEE Transactions on Pattern Analysis and Machine Intelligence
Cost-conscious classifier ensembles
Pattern Recognition Letters
Statistical Comparisons of Classifiers over Multiple Data Sets
The Journal of Machine Learning Research
Ensemble Pruning Via Semi-definite Programming
The Journal of Machine Learning Research
Engineering multiversion neural-net systems
Neural Computation
Induction of multiple fuzzy decision trees based on rough set technique
Information Sciences: an International Journal
Incremental construction of classifier and discriminant ensembles
Information Sciences: an International Journal
Issues in stacked generalization
Journal of Artificial Intelligence Research
Improving generalization of fuzzy IF-THEN rules by maximizing fuzzy entropy
IEEE Transactions on Fuzzy Systems
Ensemble strategies with adaptive evolutionary programming
Information Sciences: an International Journal
Classifier combination based on confidence transformation
Pattern Recognition
Score normalization in multimodal biometric systems
Pattern Recognition
Ensemble of niching algorithms
Information Sciences: an International Journal
A dynamic classifier ensemble selection approach for noise data
Information Sciences: an International Journal
Ensemble of feature sets and classification algorithms for sentiment classification
Information Sciences: an International Journal
A novel fuzzy Dempster-Shafer inference system for brain MRI segmentation
Information Sciences: an International Journal
Embedded local feature selection within mixture of experts
Information Sciences: an International Journal
In practice, classifiers in an ensemble are not independent. This paper continues our previous work on ensemble subset selection [A. Ulas, M. Semerci, O.T. Yildiz, E. Alpaydin, Incremental construction of classifier and discriminant ensembles, Information Sciences 179 (9) (2009) 1298-1318] and has two parts. First, we investigate the effect of four factors on the correlation between classifiers: (i) the training algorithm, (ii) the algorithm's hyperparameters, (iii) resampled training sets, and (iv) input feature subsets. Simulations with 14 classifiers on 38 data sets indicate that shared hyperparameters and overlapping training sets induce more positive correlation than feature subsets or algorithms do. Second, we propose postprocessing before fusion: principal component analysis (PCA) is applied to the outputs of the correlated experts to form uncorrelated eigenclassifiers. Combining the information from all classifiers in this way can be better than subset selection, where some base classifiers are pruned before combination, because using all of them preserves redundant information that pruning would discard.
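A minimal sketch of the eigenclassifier idea, assuming scikit-learn: several base classifiers are trained on the same data (so their outputs are correlated), the correlation is inspected, and PCA over the stacked posterior outputs yields decorrelated components that a combiner then fuses. The data set, the three base learners, and the logistic-regression combiner are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Correlated experts: different algorithms trained on the same training set.
experts = [DecisionTreeClassifier(random_state=0),
           KNeighborsClassifier(),
           GaussianNB()]
for clf in experts:
    clf.fit(X_tr, y_tr)

def expert_outputs(X):
    # Stack each expert's posterior for the positive class: (n_samples, n_experts).
    return np.column_stack([clf.predict_proba(X)[:, 1] for clf in experts])

# Part one of the abstract: the experts' outputs are positively correlated.
print("correlation of expert outputs:\n",
      np.corrcoef(expert_outputs(X_tr), rowvar=False))

# Part two: PCA on the correlated outputs gives uncorrelated "eigenclassifiers".
# Keeping all components means no expert is pruned, so redundancy is retained.
pca = PCA()
Z_tr = pca.fit_transform(expert_outputs(X_tr))
Z_te = pca.transform(expert_outputs(X_te))

# A simple trained combiner over the decorrelated outputs (an assumption here;
# a held-out validation split would normally be used to fit the combiner).
combiner = LogisticRegression().fit(Z_tr, y_tr)
print("ensemble accuracy:", accuracy_score(y_te, combiner.predict(Z_te)))
```

Because PCA is an orthogonal rotation, fusing all components loses no information from the experts; the decorrelation simply makes the combiner's job easier than working with the raw, overlapping outputs.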