Exploiting multiple classifier types with active learning

  • Authors:
  • Zhenyu Lu; Josh Bongard

  • Affiliations:
  • University of Vermont, Burlington, VT, USA; University of Vermont, Burlington, VT, USA

  • Venue:
  • Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09)
  • Year:
  • 2009

Abstract

Many approaches to active learning train a single classifier by periodically choosing new data points about which the classifier is least confident, but designing an unbiased confidence measure is nontrivial. An alternative approach is to train an ensemble of classifiers by periodically choosing data points that cause maximal disagreement among them. Many classifiers with different underlying structures fit this framework, but different classifier types are better suited to some data sets than to others. The question then arises of how to find the most suitable classifier type for a given data set. In this work, an evolutionary algorithm is proposed to address this problem. The algorithm starts with a mixed population of artificial neural networks and decision trees, and iteratively adapts the ratio of the two classifier types according to a replacement strategy. Experiments with synthetic and real data sets show that when the algorithm considers both fitness and classifier type during replacement, the population becomes saturated with accurate instantiations of the more suitable classifier type. This allows the algorithm to perform consistently well across data sets without having to determine a suitable classifier type a priori.
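
The approach described in the abstract is a query-by-committee style active learner whose committee is an evolving population of mixed classifier types. The following is a minimal sketch of that idea, not the authors' implementation: it assumes scikit-learn's MLPClassifier and DecisionTreeClassifier as stand-ins for the paper's neural networks and decision trees, and the disagreement measure, fitness evaluation, and type-aware replacement heuristic shown here are illustrative assumptions.

# Illustrative sketch only: query-by-committee active learning over an evolving
# mix of classifier types. All names and parameters here are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def make_classifier(kind, rng):
    # The two classifier types considered in the paper.
    if kind == "ann":
        return MLPClassifier(hidden_layer_sizes=(10,), max_iter=300,
                             random_state=int(rng.integers(1 << 30)))
    return DecisionTreeClassifier(max_depth=5,
                                  random_state=int(rng.integers(1 << 30)))

def vote_disagreement(ensemble, X):
    # Per-point disagreement: 1 - (fraction of members casting the majority vote).
    votes = np.stack([clf.predict(X) for clf in ensemble])
    majority = np.max([np.mean(votes == c, axis=0) for c in np.unique(votes)], axis=0)
    return 1.0 - majority

def active_learn(X_pool, y_pool, n_init=10, n_queries=30, pop_size=10, seed=0):
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    # Start with an even mix of the two types; the ratio adapts over time.
    kinds = ["ann"] * (pop_size // 2) + ["tree"] * (pop_size - pop_size // 2)
    for _ in range(n_queries):
        ensemble = [make_classifier(k, rng).fit(X_pool[labeled], y_pool[labeled])
                    for k in kinds]
        # Query the unlabeled point the committee disagrees about the most.
        unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
        scores = vote_disagreement(ensemble, X_pool[unlabeled])
        labeled.append(int(unlabeled[np.argmax(scores)]))
        # Replacement that considers both fitness and type (assumed heuristic):
        # the least accurate member is replaced by a fresh instance of the type
        # of the most accurate member.
        fitness = [clf.score(X_pool[labeled], y_pool[labeled]) for clf in ensemble]
        kinds[int(np.argmin(fitness))] = kinds[int(np.argmax(fitness))]
    return kinds, labeled

Under this kind of replacement, whichever type keeps producing the more accurate members gradually takes over the population, which mirrors the saturation effect described in the abstract.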