Local estimation of posterior class probabilities to minimize classification errors

  • Authors:
  • A. Guerrero-Curieses; J. Cid-Sueiro; R. Alaiz-Rodriguez; A. R. Figueiras-Vidal

  • Affiliations:
  • Dept. de Teoría de la Señal y Comunicaciones, Univ. Carlos III de Madrid, Spain

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2004

Abstract

Decision theory shows that the optimal decision is a function of the posterior class probabilities. More specifically, in binary classification, the optimal decision rule compares the posterior probability with a threshold. Therefore, the most accurate estimates of the posterior probabilities are needed near these decision thresholds. This paper discusses the design of objective functions that provide more accurate estimates of the probability values around the thresholds, taking the characteristics of each decision problem into account. We propose learning algorithms based on the stochastic gradient minimization of these loss functions, and we show that classifier performance improves when the algorithms behave like sample selectors: samples near the decision boundary become the most relevant during learning.
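
The sketch below is a minimal illustration of the general idea described in the abstract, not the paper's actual objective-function family: a logistic posterior estimator is trained by stochastic gradient updates whose magnitude is weighted by how close the current posterior estimate is to the decision threshold, so learning concentrates on samples near the decision boundary. The Gaussian weighting kernel, the `threshold` and `bandwidth` parameters, and the synthetic data are all assumptions made for this example.

```python
# Hypothetical sketch: threshold-focused stochastic-gradient training of a
# logistic posterior estimator (illustrative only; not the paper's loss design).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary problem: two Gaussian classes in 2-D.
n = 2000
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 2)) + np.where(y[:, None] == 1, 1.0, -1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

threshold = 0.5   # decision threshold on the posterior P(y=1 | x)  (assumed)
bandwidth = 0.15  # how sharply samples near the threshold are emphasized (assumed)
lr = 0.05
w = np.zeros(X.shape[1])
b = 0.0

for epoch in range(20):
    for i in rng.permutation(n):
        p = sigmoid(X[i] @ w + b)          # current posterior estimate
        # Weight grows as the estimate approaches the decision threshold,
        # so the updates act as a soft sample selector around the boundary.
        weight = np.exp(-((p - threshold) ** 2) / (2.0 * bandwidth ** 2))
        grad = weight * (p - y[i])         # weighted cross-entropy gradient
        w -= lr * grad * X[i]
        b -= lr * grad

p_hat = sigmoid(X @ w + b)
acc = np.mean((p_hat >= threshold) == (y == 1))
print(f"training accuracy with threshold-focused updates: {acc:.3f}")
```

Setting `bandwidth` large recovers (approximately) plain cross-entropy training, while a small value focuses nearly all of the gradient signal on samples whose estimated posterior sits near the threshold, which is the qualitative behavior the abstract attributes to the proposed algorithms.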