An experimental study of one- and two-level classifier fusion for different sample sizes
Pattern Recognition Letters
Unlike fixed combining rules, a trainable combiner is applicable to ensembles of diverse base classifier architectures with incomparable outputs. A trainable combiner, however, requires an additional step: deriving a second-stage training dataset from the base classifier outputs. Although several strategies have been devised, it is so far unclear which is superior in a given situation. In this paper we investigate three principal training techniques: re-using the training dataset for both stages, using an independent validation set, and stacked generalization. In experiments with several datasets we observed that stacked generalization outperforms the other techniques in most situations, with the exception of very small sample sizes, where the re-use strategy performs better. We show that stacked generalization introduces additional noise into the second-stage training dataset and should therefore be paired with simple combiners that are insensitive to this noise. We propose an extension of the stacked generalization approach which significantly improves the robustness of the combiner.
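To make the three second-stage training strategies concrete, the sketch below (ours, not the paper's code) derives the combiner's training set in each of the three ways. scikit-learn, the particular pool of base classifiers, and the logistic-regression combiner are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# Illustrative heterogeneous ensemble and a trainable combiner (assumptions, not from the paper).
BASE_MODELS = [GaussianNB(), KNeighborsClassifier(), DecisionTreeClassifier()]
COMBINER = LogisticRegression(max_iter=1000)

def base_outputs(models, X):
    """Second-stage features: concatenated posterior estimates of all base classifiers."""
    return np.hstack([m.predict_proba(X) for m in models])

def train_reuse(X, y):
    """Strategy 1: re-use the same data for both stages (base outputs are optimistically biased)."""
    fitted = [clone(m).fit(X, y) for m in BASE_MODELS]
    comb = clone(COMBINER).fit(base_outputs(fitted, X), y)
    return fitted, comb

def train_validation(X, y, val_frac=0.3, seed=0):
    """Strategy 2: train base classifiers and combiner on disjoint subsets."""
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=val_frac, random_state=seed)
    fitted = [clone(m).fit(X_tr, y_tr) for m in BASE_MODELS]
    comb = clone(COMBINER).fit(base_outputs(fitted, X_val), y_val)
    return fitted, comb

def train_stacked(X, y, n_folds=5, seed=0):
    """Strategy 3: stacked generalization -- the combiner is trained on
    out-of-fold base outputs, which imitate behaviour on unseen data but
    add fold-to-fold variability (the extra noise discussed above).
    Assumes every fold contains all classes."""
    n_classes = len(np.unique(y))
    meta = np.zeros((len(y), n_classes * len(BASE_MODELS)))
    for tr, te in KFold(n_splits=n_folds, shuffle=True, random_state=seed).split(X):
        for j, m in enumerate(BASE_MODELS):
            probs = clone(m).fit(X[tr], y[tr]).predict_proba(X[te])
            meta[te, j * n_classes:(j + 1) * n_classes] = probs
    fitted = [clone(m).fit(X, y) for m in BASE_MODELS]   # refit base classifiers on all data
    comb = clone(COMBINER).fit(meta, y)                  # combiner sees only out-of-fold outputs
    return fitted, comb

def predict(fitted, comb, X):
    """Combine base classifier outputs on new data with the trained combiner."""
    return comb.predict(base_outputs(fitted, X))
```

Consistent with the observation above, a simpler, noise-insensitive combiner (for example, averaging the posteriors) could be substituted for the logistic regression in the stacked variant; the paper's proposed robustness extension is not reproduced here because its details are not given in the abstract.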