Original Contribution: Stacked generalization. Neural Networks
Decision Combination in Multiple Classifier Systems. IEEE Transactions on Pattern Analysis and Machine Intelligence
Combining the results of several neural network classifiers. Neural Networks
Machine Learning
Combination of Multiple Classifiers Using Local Accuracy Estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence
IEEE Transactions on Pattern Analysis and Machine Intelligence
Sum Versus Vote Fusion in Multiple Classifier Systems. IEEE Transactions on Pattern Analysis and Machine Intelligence
Pattern Recognition Letters
A Brief Introduction to Boosting. IJCAI '99 Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence
CVPR '98 Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
AAAI '96 Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 1
We experiment with different fusion methods when bagging k-NN classifiers under various conditions. Experiments with four types of bagging are made at four training set sizes, using two metrics. The aim is to find the conditions for optimum bagging performance, and the best fusion rule under each of those conditions; we compare the performance of the different fusion strategies under each condition. The fusion methods used are Sum, Modified Product (MProduct) [2], Vote and Moderation [1]. Results show that performance depends on the data set, the number of nearest neighbors (k), the number of fused classifiers and the size of the training set. Overall, the three rules Sum, MProduct and Moderation show closely matched performance, while Vote behaves quite differently; among these three, Moderation tracks either Sum or MProduct. Results indicate that MProduct outperforms Sum in many instances, and in some of these Sum failed to outperform the single classifier while MProduct did. Moderation is the second best, while Vote is inferior, especially at even values of k: this is an inherent weakness of Vote, since with an even number of voters ties must be resolved randomly. At k=1 all rules yield similar results. In a few instances Moderation outperforms all other rules, but in general MProduct is the best choice.
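As a rough illustration of the setup described above, the sketch below bags k-NN classifiers on bootstrap resamples and fuses their class posteriors with the Sum, plain Product and Vote rules. The exact MProduct and Moderation rules are defined in the cited papers [1, 2] and are not reproduced here; the data, helper names and parameter values are all illustrative assumptions, not the authors' experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_posteriors(X_train, y_train, X_test, k, n_classes):
    """Class posteriors from one k-NN classifier: the fraction of the
    k nearest training points that belong to each class."""
    post = np.zeros((len(X_test), n_classes))
    for i, x in enumerate(X_test):
        d = np.linalg.norm(X_train - x, axis=1)
        nn = y_train[np.argsort(d)[:k]]          # labels of the k nearest neighbors
        for c in range(n_classes):
            post[i, c] = np.mean(nn == c)
    return post

def fuse(posteriors, rule):
    """Combine a list of per-classifier posterior arrays with a fusion rule."""
    P = np.stack(posteriors)                     # shape (n_clf, n_samples, n_classes)
    if rule == "sum":
        return P.sum(axis=0).argmax(axis=1)
    if rule == "product":
        # Small floor keeps one zero posterior from vetoing the whole product;
        # MProduct [2] addresses this differently.
        return np.prod(P + 1e-12, axis=0).argmax(axis=1)
    if rule == "vote":
        votes = P.argmax(axis=2)                 # each classifier's hard decision
        return np.array([np.bincount(v, minlength=P.shape[2]).argmax()
                         for v in votes.T])
    raise ValueError(rule)

# Toy two-class problem (illustrative data only)
X = rng.normal(size=(200, 2)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_test = rng.normal(size=(50, 2)); y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)

# Bagging: each k-NN is built on a bootstrap resample of the training set
posteriors = []
for _ in range(7):
    idx = rng.integers(0, len(X), len(X))
    posteriors.append(knn_posteriors(X[idx], y[idx], X_test, k=5, n_classes=2))

for rule in ("sum", "product", "vote"):
    acc = np.mean(fuse(posteriors, rule) == y_test)
    print(rule, round(acc, 2))
```

Note how the Vote weakness mentioned in the abstract appears here: with an even k, a single classifier's posteriors can tie at 0.5, and `argmax` then resolves the tie arbitrarily, whereas Sum and Product retain the soft posterior information.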