Due to the wide variety of fusion techniques available for combining multiple classifiers into a more accurate one, a number of studies have examined when particular fusion methods should be preferred over others. The sample-size behavior of these fusion methods, however, has so far received little attention in the multiple-classifier-systems literature. The main contribution of this paper is therefore to investigate the effect of training sample size on the relative performance of fusion methods and to gain more insight into the conditions under which some combination rules are superior. A large experiment studies the performance of several fixed and trainable combination rules for one- and two-level classifier fusion at different training sample sizes. The experimental results support the following conclusions. When one-level fusion is used to combine homogeneous or heterogeneous base classifiers, fixed rules outperform trainable ones in nearly all cases; the single exception is the merging of heterogeneous classifiers at large sample sizes. Moreover, for every sample size considered, the best classification is generally achieved by a second level of combination, that is, by using one fusion rule to further combine a set of ensemble classifiers, each of which is itself constructed by fusing base classifiers. In that setting, it appears appropriate to use different types of fusion rules (fixed and trainable) as the combiners at the two levels.
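The distinction between fixed and trainable combiners, and the two-level scheme described above, can be made concrete with a small sketch. The following is an illustrative toy example, not the paper's experimental setup: the synthetic Gaussian data, the `NearestMean` base classifier, the random feature subspaces, and the least-squares combiner are all assumptions chosen for brevity. A fixed rule (the mean of the soft outputs) needs no extra data, while a trainable rule fits weights on a held-out validation set; the two-level variant applies the fixed mean rule within groups of base classifiers and then trains a combiner over the group outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: two Gaussian clouds in 4 dimensions.
# (Purely illustrative; not the data used in the paper.)
def make_data(n):
    X0 = rng.normal(loc=-1.0, scale=1.5, size=(n, 4))
    X1 = rng.normal(loc=+1.0, scale=1.5, size=(n, 4))
    return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]

# Hypothetical base classifier: nearest class mean on a feature subset,
# returning a soft score in [0, 1] via logistic squashing of the margin.
class NearestMean:
    def __init__(self, feats):
        self.feats = feats
    def fit(self, X, y):
        Xs = X[:, self.feats]
        self.m0, self.m1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
        return self
    def score(self, X):
        Xs = X[:, self.feats]
        margin = (np.linalg.norm(Xs - self.m0, axis=1)
                  - np.linalg.norm(Xs - self.m1, axis=1))
        return 1.0 / (1.0 + np.exp(-margin))

Xtr, ytr = make_data(200)
Xval, yval = make_data(200)   # held-out set: only the trainable rule uses it
Xte, yte = make_data(2000)

# Homogeneous base classifiers, differing by their random feature subspace.
bases = [NearestMean(rng.choice(4, size=2, replace=False)).fit(Xtr, ytr)
         for _ in range(9)]

def scores(clfs, X):
    return np.column_stack([c.score(X) for c in clfs])

# One-level FIXED rule: mean (sum) of the soft outputs.
fixed_pred = scores(bases, Xte).mean(axis=1) > 0.5

# One-level TRAINABLE rule: least-squares weights fitted on validation data.
S = np.column_stack([scores(bases, Xval), np.ones(len(Xval))])
w, *_ = np.linalg.lstsq(S, yval, rcond=None)
trained_pred = np.column_stack([scores(bases, Xte),
                                np.ones(len(Xte))]) @ w > 0.5

# Two-level scheme: fixed mean rule within three groups of three base
# classifiers, then the trainable combiner over the three group outputs.
groups = [bases[i:i + 3] for i in range(0, 9, 3)]
Gval = np.column_stack([scores(g, Xval).mean(axis=1) for g in groups])
w2, *_ = np.linalg.lstsq(np.column_stack([Gval, np.ones(len(Gval))]),
                         yval, rcond=None)
Gte = np.column_stack([scores(g, Xte).mean(axis=1) for g in groups])
two_level_pred = np.column_stack([Gte, np.ones(len(Gte))]) @ w2 > 0.5

print("fixed mean rule accuracy:   %.3f" % (fixed_pred == yte).mean())
print("trained combiner accuracy:  %.3f" % (trained_pred == yte).mean())
print("two-level scheme accuracy:  %.3f" % (two_level_pred == yte).mean())
```

Note how the sketch mirrors the sample-size issue studied in the paper: the fixed rule's behavior is unaffected by the size of `Xval`, whereas the least-squares combiner's weights (and thus its advantage or disadvantage) depend directly on how much held-out data is available.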