Empirical study on weighted voting multiple classifiers

  • Authors:
  • Yanmin Sun; Mohamed S. Kamel; Andrew K. C. Wong

  • Affiliations:
  • Pattern Analysis and Machine Intelligence Lab, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada (all three authors)

  • Venue:
  • ICAPR'05: Proceedings of the Third International Conference on Advances in Pattern Recognition, Part I
  • Year:
  • 2005


Abstract

Combining multiple classifiers is expected to increase classification accuracy, and research on combination strategies for multiple classifiers has become a popular topic. For a crisp classifier, which returns a discrete class label rather than real-valued probabilities for each class, the most commonly used combination method is majority voting. Both majority and weighted majority voting are classifier-based voting schemes: they assign each base classifier a single, fixed confidence in voting. However, each classifier should have different voting priorities in different regions of its learning space, and these differences cannot be reflected by a classifier-based voting strategy. In this paper, we propose two additional voting strategies that take such differences into consideration. We apply the AdaBoost algorithm to generate multiple classifiers and vary its voting strategy. The predictive ability of each voting strategy is then tested and compared on eight datasets taken from the UCI Machine Learning Repository. The experimental results show that one of the proposed voting strategies, the sample-based voting scheme, achieves better classification accuracy.
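
To make the distinction concrete, below is a minimal Python sketch (not from the paper) contrasting a classifier-based weighted vote, as in AdaBoost's standard combination rule, with one plausible reading of a sample-based vote in which a classifier's weight varies with the test sample. The decision stumps, the weights, and the local_confidence function are hypothetical illustrations, not the authors' actual scheme.

```python
# Classifier-based vs. sample-based voting for crisp classifiers.
# Everything below is an illustrative sketch under assumed details.

def classifier_based_vote(classifiers, alphas, x):
    """Weighted majority vote: classifier t always casts weight alphas[t]."""
    votes = {}
    for h, alpha in zip(classifiers, alphas):
        label = h(x)
        votes[label] = votes.get(label, 0.0) + alpha
    return max(votes, key=votes.get)

def sample_based_vote(classifiers, local_confidence, x):
    """Sample-based vote: classifier t casts a weight that depends on x,
    e.g. an estimate of its accuracy in the region of the learning space
    that x falls into."""
    votes = {}
    for t, h in enumerate(classifiers):
        label = h(x)
        votes[label] = votes.get(label, 0.0) + local_confidence(t, x)
    return max(votes, key=votes.get)

if __name__ == "__main__":
    # Three crisp decision stumps on a single feature, labels +1 / -1.
    thresholds = [0.3, 0.5, 0.7]
    classifiers = [lambda x, th=th: 1 if x > th else -1 for th in thresholds]
    alphas = [0.4, 1.0, 0.6]  # fixed, AdaBoost-style classifier weights

    # Toy per-sample confidence function (purely illustrative).
    def local_confidence(t, x):
        return 1.0 / (1.0 + abs(x - thresholds[t]))

    x = 0.6
    print(classifier_based_vote(classifiers, alphas, x))        # -> 1
    print(sample_based_vote(classifiers, local_confidence, x))  # -> 1
```

The only difference between the two schemes is where a vote's weight comes from: a fixed alpha per classifier in the first case, versus a function of both the classifier and the test sample in the second, which is what lets a sample-based scheme reflect a classifier's varying reliability across its learning space.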