Optimizing weights in combining classifiers in natural language learning

  • Authors:
  • Seong-Bae Park;Heegeun Yoon

  • Affiliations:
  • Lab. of Machine Learning, Department of Computer Engineering, Kyungpook National University, Daegu, Korea (both authors)

  • Venue:
  • ACST'07 Proceedings of the third conference on IASTED International Conference: Advances in Computer Science and Technology
  • Year:
  • 2007


Abstract

Many machine learning algorithms, despite their success on a wide range of tasks, have their own idiosyncrasies in generalization. As a result, in many real-world tasks a committee of several diverse classifiers outperforms any single committee member. How to combine the classifiers so as to achieve high performance, however, remains an open problem. This paper proposes a novel method based on genetic algorithms for combining multiple classifiers. Experimental results on natural language learning show that the proposed method is effective: for compound noun decomposition of Korean, the combination of a naïve Bayes classifier, decision trees, and memory-based learning achieves 90.14% accuracy on average, while the base classifiers achieve accuracies of 73.17%, 82.28%, and 86.26% respectively.
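The abstract does not give the details of the genetic search, but the general idea of evolving combination weights for a weighted-vote ensemble can be sketched as follows. Everything here is illustrative: the synthetic binary task, the three noisy base "classifiers", and the GA hyperparameters (population size, one-point crossover, point mutation) are assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: a genetic algorithm that searches for combination
# weights of three base classifiers in a weighted-vote ensemble.
import random

random.seed(0)

# Synthetic binary task: true labels plus P(label=1) estimates from three
# hypothetical base classifiers of varying quality (larger noise = weaker).
labels = [random.randint(0, 1) for _ in range(200)]

def base_prob(y, noise):
    # A base classifier that is right most of the time, perturbed by noise.
    p = 0.8 if y == 1 else 0.2
    return min(1.0, max(0.0, p + random.uniform(-noise, noise)))

samples = [(y, [base_prob(y, n) for n in (0.5, 0.35, 0.25)]) for y in labels]

def accuracy(weights):
    """Fitness: accuracy of the ensemble that predicts 1 when the
    weighted average of the base P(1) estimates exceeds 0.5."""
    s = sum(weights)
    correct = 0
    for y, probs in samples:
        avg = sum(w * p for w, p in zip(weights, probs)) / s
        correct += int((avg > 0.5) == (y == 1))
    return correct / len(samples)

def genetic_search(pop_size=30, generations=40, mutation=0.1):
    # Initial population: random weight vectors in [0, 1)^3.
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=accuracy, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, 3)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:    # point mutation
                child[random.randrange(3)] = random.random()
            children.append(child)
        pop = elite + children
    return max(pop, key=accuracy)

best = genetic_search()
print("best weights:", best, "ensemble accuracy:", accuracy(best))
```

With this fitness function the GA tends to put more weight on the lower-noise classifiers, which mirrors the paper's finding that an optimized combination can beat every individual committee member.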