New Measure of Classifier Dependency in Multiple Classifier Systems

  • Authors:
  • Dymitr Ruta; Bogdan Gabrys

  • Venue:
  • MCS '02 Proceedings of the Third International Workshop on Multiple Classifier Systems
  • Year:
  • 2002

Abstract

Recent findings in the domain of combining classifiers provide a surprising revision of the usefulness of diversity for modelling combined performance. Although there is common agreement that a successful fusion system should be composed of accurate and diverse classifiers, experimental results show very weak correlations between various diversity measures and the performance of combining methods. In effect, neither the combined performance nor its improvement over the mean classifier performance seems to be measurable in a consistent and well-defined manner. At the same time, the most successful diversity measures, which can hardly be regarded as measuring diversity at all, are based on counting error coincidences and thereby move closer to definitions of the combined errors themselves. Following this trend, we decided to use the combining error, normalized within its derivable error limits, directly as a measure of classifier dependency. Given its simplicity and representativeness, we chose the majority voting error for the construction of the measure. We examine this novel dependency measure on a number of real datasets and classifiers, showing its ability to model the combining improvement over the individual mean.
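
A minimal sketch of the idea described above, assuming combinatorial lower/upper limits on the majority voting error for fixed individual error rates and a simple linear rescaling within those limits; the exact limit formulas and normalization used in the paper may differ:

```python
# Hypothetical sketch: a dependency measure built from the majority voting
# error (MVE), normalized within limits derivable from the individual
# classifier error rates.  The limit formulas and the linear normalization
# below are illustrative assumptions, not the paper's exact construction.
import numpy as np

def majority_voting_error(errors: np.ndarray) -> float:
    """errors: (n_classifiers, n_samples) binary matrix, 1 = classifier wrong.
    A sample counts as a majority-voting error when more than half err."""
    n, _ = errors.shape
    k = n // 2 + 1                      # wrong votes needed for a wrong majority
    return float(np.mean(errors.sum(axis=0) >= k))

def mve_limits(error_rates: np.ndarray) -> tuple:
    """Combinatorial lower/upper limits on the MVE for fixed individual error
    rates (errors distributed as favourably / unfavourably as possible)."""
    n = len(error_rates)
    k = n // 2 + 1
    total = error_rates.sum()
    low = max(0.0, (total - (k - 1)) / (n - k + 1))   # pack errors onto few samples
    high = min(1.0, total / k)                        # each MV error costs >= k errors
    return low, high

def dependency(errors: np.ndarray) -> float:
    """Observed MVE rescaled within its derivable limits:
    0 -> most favourable error distribution, 1 -> least favourable one."""
    mve = majority_voting_error(errors)
    low, high = mve_limits(errors.mean(axis=1))
    return 0.5 if np.isclose(high, low) else (mve - low) / (high - low)

# Toy usage: three classifiers, ten samples (1 = misclassified).
E = np.array([[0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
              [0, 1, 0, 1, 0, 0, 0, 1, 0, 0],
              [1, 1, 0, 0, 0, 0, 1, 0, 0, 0]])
print(majority_voting_error(E), mve_limits(E.mean(axis=1)), dependency(E))
```

Under these assumptions, values near 0 correspond to error distributions close to the favourable (negatively dependent) limit, while values near 1 indicate error coincidences close to the worst-case limit.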