Making Diversity Enhancement Based on Multiple Classifier System by Weight Tuning
Neural Processing Letters
When a Multiple Classifier System is employed, one of the most popular methods for classifier fusion is simple majority voting. However, when the ensemble members perform unevenly, the effectiveness of this voting scheme degrades. This paper presents a comparison between simple and weighted voting (both dynamic and static) and introduces new weighting methods, mainly for the dynamic approach. Experimental results on several real-world data sets demonstrate the advantages of the weighting strategies over simple voting. When the dynamic and static approaches are compared, the results show that dynamic weighting achieves higher classification accuracy than the static strategy.
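To illustrate the difference the abstract describes, the following is a minimal sketch (not the authors' actual method) of simple majority voting versus weighted voting over one sample's predictions. The weights here are placeholders; in practice they could come from static validation accuracy or from a dynamic local-accuracy estimate.

```python
from collections import Counter

def majority_vote(predictions):
    # predictions: one class label per ensemble member
    return Counter(predictions).most_common(1)[0][0]

def weighted_vote(predictions, weights):
    # weights: one weight per classifier (hypothetical values here;
    # static schemes fix them once, dynamic schemes recompute per sample)
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Three classifiers predict for a single sample.
preds = ["A", "B", "B"]
print(majority_vote(preds))                    # "B": two votes beat one
print(weighted_vote(preds, [0.9, 0.3, 0.4]))   # "A": 0.9 outweighs 0.3 + 0.4
```

Note how the weighted scheme can overturn the majority when one classifier is judged far more reliable than the others, which is exactly the situation of non-uniform ensemble performance that motivates the paper.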