Many approaches to active learning train a single classifier and periodically query the data points about which it is least confident, but designing an unbiased confidence measure is nontrivial. An alternative is to train an ensemble of classifiers and periodically query the data points that cause maximal disagreement among them. Many classifiers with different underlying structures fit this framework, but some classifier types suit a given data set better than others, which raises the question of how to find the most suitable classifier type for a given data set. In this work, an evolutionary algorithm is proposed to address this problem. The algorithm starts with a mixed population of artificial neural networks and decision trees, and iteratively adapts the ratio of the two classifier types according to a replacement strategy. Experiments on synthetic and real data sets show that when the replacement strategy considers both fitness and classifier type, the population becomes saturated with accurate instances of the more suitable classifier type. This allows the algorithm to perform consistently well across data sets, without having to determine a suitable classifier type a priori.
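The disagreement-based selection step described above (query by committee) can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the committee members here are hand-made threshold stumps standing in for the trained neural networks and decision trees, and `vote_entropy` is one common way to score disagreement (the paper does not specify this exact measure).

```python
import math

def vote_entropy(votes):
    """Entropy of the committee's label votes for one point.

    Zero when all members agree; maximal when votes are evenly split.
    """
    n = len(votes)
    ent = 0.0
    for label in set(votes):
        p = votes.count(label) / n
        ent -= p * math.log2(p)
    return ent

def most_informative(committee, pool):
    """Return the unlabeled point whose predicted labels disagree the most."""
    return max(pool, key=lambda x: vote_entropy([clf(x) for clf in committee]))

# Toy committee of three threshold classifiers (stand-ins for the trained
# ensemble members); thresholds are arbitrary assumptions for illustration.
committee = [
    lambda x: int(x > 0.3),
    lambda x: int(x > 0.5),
    lambda x: int(x > 0.7),
]
pool = [0.1, 0.4, 0.6, 0.9]  # hypothetical unlabeled pool

# Points near the committee's disputed region maximize disagreement,
# so they would be queried for labeling next.
print(most_informative(committee, pool))
```

In the full algorithm, the labeled point is added to the training set, the ensemble is retrained, and the replacement strategy then decides which individuals to replace based on both their fitness and their classifier type.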