Feature Weighted Ensemble Classifiers - A Modified Decision Scheme

  • Authors:
  • Thomas Martini Jørgensen;Christian Linneberg

  • Venue:
  • MCS '01 Proceedings of the Second International Workshop on Multiple Classifier Systems
  • Year:
  • 2001

Abstract

A number of methods exist for determining the output of an aggregated classifier. A common approach is to apply the majority-voting scheme. If the performance of the individual classifiers can be ranked in some intelligent way, the voting process can be modified by assigning an individual weight to each ensemble member. For some base classifiers, such as decision trees, a given node or leaf is activated if the input lies within a well-defined region of input space; each leaf node can therefore be considered as defining a feature in input space. In this paper, we present a method for adjusting the voting process of an ensemble by assigning individual weights to this set of features, so that different nodes of the same decision tree can contribute differently to the overall vote. Using a randomised "look-up technique" on the training examples, the weights used in the decision process are determined by a perceptron-like learning rule. We present results obtained by applying this technique to bagged ensembles of C4.5 trees and to the so-called PERT classifier, an ensemble of highly randomised decision trees. The proposed technique is compared to the majority-voting scheme on a number of data sets.
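The abstract's central idea, one trainable weight per (tree, leaf, class) rather than one weight per tree, updated by a perceptron-like rule on misclassified examples, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the "trees" here are hypothetical stand-ins (sorted split thresholds on a 1-D input, with the leaf index given by the interval an input falls into), and the randomised look-up step from the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: two classes, separable at x = 0.5.
X = rng.uniform(0, 1, size=200)
y = (X > 0.5).astype(int)

# A stand-in "tree": a sorted set of split thresholds. The leaf an
# input activates is the index of the interval containing it.
def make_tree(n_splits=3):
    return np.sort(rng.uniform(0, 1, size=n_splits))

def leaf_of(tree, x):
    return int(np.searchsorted(tree, x))

trees = [make_tree() for _ in range(10)]

# One weight per (tree, leaf, class): each leaf-node feature votes
# with its own strength, so leaves of the same tree can differ.
weights = [np.zeros((len(t) + 1, 2)) for t in trees]

def predict(x):
    scores = np.zeros(2)
    for tree, w in zip(trees, weights):
        scores += w[leaf_of(tree, x)]
    return int(np.argmax(scores))

# Perceptron-like rule: on an error, reinforce the true class's
# activated features and penalise the wrongly predicted class's.
for _ in range(20):
    for x, target in zip(X, y):
        pred = predict(x)
        if pred != target:
            for tree, w in zip(trees, weights):
                leaf = leaf_of(tree, x)
                w[leaf, target] += 1.0
                w[leaf, pred] -= 1.0

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
```

Because the weights are attached to leaves rather than whole trees, even an ensemble of weak random partitions can carve out an accurate combined decision boundary once the per-feature weights are trained.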