Confidence Transformation for Combining Classifiers

  • Authors:
  • Cheng-Lin Liu; Hongwei Hao; Hiroshi Sako

  • Affiliations:
  • Hitachi, Central Research Laboratory, Japan; University of Science and Technology Beijing, Department of Computer Science, P.R. China; Hitachi, Central Research Laboratory, Japan

  • Venue:
  • Pattern Analysis & Applications
  • Year:
  • 2004

Abstract

This paper investigates a number of confidence transformation methods for measurement-level combination of classifiers. Each confidence transformation method is the combination of a scaling function and an activation function. The activation functions correspond to different types of confidences: likelihood (exponential), log-likelihood, sigmoid, and the evidence combination of sigmoid measures. The sigmoid and evidence measures serve as approximations to class probabilities. The scaling functions are derived by Gaussian density modeling, logistic regression with variable inputs, and related techniques. We test the confidence transformation methods in handwritten digit recognition by combining varying sets of classifiers: neural classifiers only, distance classifiers only, strong classifiers, and mixed strong/weak classifiers. The results show that confidence transformation is effective in improving the combination performance in all settings. The normalization of class probabilities to unit sum is shown to be detrimental to the combination performance. Comparing the scaling functions, the Gaussian method and logistic regression perform well in most cases. Regarding the confidence types, the sigmoid and evidence measures perform well in most cases, and the evidence measure generally outperforms the sigmoid measure. We also show that the confidence transformation methods are highly robust to the size of the validation sample used for parameter estimation.
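
The pipeline described in the abstract (scale each classifier's raw outputs, apply an activation such as the sigmoid to obtain per-class confidences, then combine the classifiers at measurement level) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the scaling parameters `alpha` and `beta` are placeholders for values that the paper would estimate on a validation set (e.g., by logistic regression or Gaussian density modeling), and the combination rule shown here is a simple average of the transformed confidences.

```python
import numpy as np

def sigmoid_confidence(scores, alpha, beta):
    """Sigmoid activation applied to linearly scaled classifier outputs.

    `alpha` and `beta` are hypothetical scaling parameters; in the paper
    they would be estimated from validation data rather than hand-set.
    """
    return 1.0 / (1.0 + np.exp(-(alpha * np.asarray(scores) + beta)))

def combine_by_average(confidence_list):
    """Measurement-level combination: average the transformed confidences
    of several classifiers and return the index of the winning class."""
    combined = np.mean(np.stack(confidence_list), axis=0)
    return int(np.argmax(combined))

# Hypothetical example: two classifiers, three classes.
scores_a = np.array([2.1, -0.5, 0.3])   # raw outputs of classifier A
scores_b = np.array([1.2, 0.1, -1.0])   # raw outputs of classifier B
conf_a = sigmoid_confidence(scores_a, alpha=1.0, beta=0.0)
conf_b = sigmoid_confidence(scores_b, alpha=0.8, beta=-0.1)
print(combine_by_average([conf_a, conf_b]))  # predicted class index
```

Note that the sketch deliberately skips the normalization of confidences to unit sum, in line with the abstract's finding that such normalization hurts combination performance.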