Sensitive Error Correcting Output Codes

  • Authors: John Langford; Alina Beygelzimer
  • Affiliations: Toyota Technological Institute, Chicago, IL; IBM T.J. Watson Research Center, Hawthorne, NY
  • Venue: COLT'05: Proceedings of the 18th Annual Conference on Learning Theory
  • Year: 2005

Abstract

We present a reduction from cost-sensitive classification to binary classification based on (a modification of) error correcting output codes. The reduction satisfies the property that $\epsilon$ regret for binary classification implies $\ell_2$-regret of at most $2\epsilon$ for cost estimation. This has several implications:

  • Any regret-minimizing online algorithm for 0/1 loss is (via the reduction) a regret-minimizing online cost-sensitive algorithm. In particular, this means that online learning can be made to work for arbitrary (i.e., totally unstructured) loss functions.
  • The output of the reduction can be thresholded so that $\epsilon$ regret for binary classification implies at most $4\sqrt{\epsilon Z}$ regret for cost-sensitive classification, where $Z$ is the expected sum of costs.
  • For multiclass problems, $\epsilon$ binary regret translates into $\ell_2$-regret of at most $4\epsilon$ in the estimation of class probabilities. For classification, this implies at most $4\sqrt{\epsilon}$ multiclass regret.
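The abstract states the reduction only through its regret guarantees. For intuition about the error-correcting-output-codes machinery it builds on, the sketch below shows the classic ECOC reduction from multiclass to binary classification. It is not the paper's cost-sensitive (SECOC) construction, and the random code matrix, logistic-regression base learner, and Hamming decoding are illustrative assumptions rather than the authors' choices.

```python
# A minimal sketch of the classic ECOC reduction from multiclass to binary
# classification, for intuition only: it is NOT the paper's SECOC construction,
# which modifies the codes to handle arbitrary costs and to obtain the stated
# regret bounds. The code matrix, base learner, and decoding rule here are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def random_code(n_classes, n_bits, rng):
    # Row i is the binary codeword assigned to class i. Constant columns are
    # skipped because they define a degenerate (single-class) binary problem.
    cols = []
    while len(cols) < n_bits:
        c = rng.integers(0, 2, size=n_classes)
        if 0 < c.sum() < n_classes:
            cols.append(c)
    return np.column_stack(cols)

def train_ecoc(X, y, code):
    # One binary learner per code bit: learner j is trained to predict
    # bit j of the codeword of the true class.
    return [LogisticRegression().fit(X, code[y, j])
            for j in range(code.shape[1])]

def predict_ecoc(X, code, learners):
    # Stack the per-bit predictions into predicted codewords, then decode
    # to the class whose codeword is nearest in Hamming distance.
    bits = np.column_stack([c.predict(X) for c in learners])
    dists = (bits[:, None, :] != code[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = rng.integers(0, 4, size=300)            # 4 synthetic classes
code = random_code(n_classes=4, n_bits=8, rng=rng)
learners = train_ecoc(X, y, code)
print(predict_ecoc(X[:5], code, learners))  # predicted class indices
```

In this standard construction, redundancy in the code matrix is what lets errors of individual binary learners be corrected at decoding time; the paper's contribution is a modification of this scheme under which binary regret translates into the cost-sensitive and multiclass regret bounds quoted above.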