Many machine learning algorithms generalize in their own idiosyncratic ways, even when they have been successful across a variety of tasks. Consequently, in many real-world tasks a committee of several diverse classifiers outperforms any single committee member. How best to combine the members to achieve high performance, however, remains an open problem. This paper proposes a novel method based on genetic algorithms for combining multiple classifiers. Experimental results on natural language learning show that the proposed method combines classifiers effectively: the combination of a naïve Bayes classifier, decision trees, and memory-based learning achieves an average accuracy of 90.14% on compound noun decomposition of Korean, while the base classifiers alone achieve 73.17%, 82.28%, and 86.26% accuracy, respectively.
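To illustrate the general idea of evolving a classifier combination with a genetic algorithm, the following is a minimal sketch, not the paper's actual method: each chromosome is one weight per base classifier, fitness is the accuracy of the weighted vote on held-out data, and the population evolves by tournament selection, uniform crossover, and Gaussian mutation with elitism. The toy probability outputs and all function names here are hypothetical stand-ins for the three base learners.

```python
import random

def combined_accuracy(weights, clf_probs, labels):
    """Accuracy of a weighted vote over the base classifiers' class probabilities."""
    correct = 0
    for i, y in enumerate(labels):
        scores = [sum(w * clf_probs[m][i][c] for m, w in enumerate(weights))
                  for c in range(len(clf_probs[0][i]))]
        correct += scores.index(max(scores)) == y  # ties break toward class 0
    return correct / len(labels)

def evolve_weights(clf_probs, labels, pop_size=20, gens=30, seed=0):
    """Tiny GA over weight vectors; elitism keeps the best individual each generation."""
    rng = random.Random(seed)
    n = len(clf_probs)
    fit = lambda w: combined_accuracy(w, clf_probs, labels)
    pop = [[rng.random() for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fit, reverse=True)
        elite = scored[:2]
        children = []
        while len(children) < pop_size - len(elite):
            # Tournament selection of two parents, then uniform crossover.
            p1 = max(rng.sample(scored, 3), key=fit)
            p2 = max(rng.sample(scored, 3), key=fit)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            if rng.random() < 0.3:  # Gaussian mutation, clamped to [0, 1]
                j = rng.randrange(n)
                child[j] = min(1.0, max(0.0, child[j] + rng.gauss(0, 0.2)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fit)

# Hypothetical per-example class-probability outputs from three base classifiers
# on a two-class validation set (illustrative data, not from the paper).
labels = [0, 1, 0, 1, 1, 0]
probs = [
    [[.6, .4], [.6, .4], [.7, .3], [.3, .7], [.4, .6], [.6, .4]],
    [[.4, .6], [.2, .8], [.4, .6], [.6, .4], [.3, .7], [.8, .2]],
    [[.7, .3], [.6, .4], [.8, .2], [.4, .6], [.6, .4], [.3, .7]],
]
w = evolve_weights(probs, labels)
acc = combined_accuracy(w, probs, labels)
```

In a real instantiation of this scheme, `probs` would come from the base classifiers' predictions on a validation split, and the evolved weights would then be applied to test data; richer chromosomes (e.g. per-class weights) fit the same loop unchanged.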