Partial likelihood for estimation of multi-class posterior probabilities

  • Authors:
  • T. Adali; Hongmei Ni; Bo Wang

  • Affiliations:
  • Dept. of Comput. Sci. & Electr. Eng., Maryland Univ., Baltimore, MD, USA

  • Venue:
  • ICASSP '99: Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume 02
  • Year:
  • 1999

Abstract

Partial likelihood (PL) provides a unified statistical framework for developing and studying adaptive techniques for nonlinear signal processing. In this paper, we present the general formulation for learning posterior probabilities with the PL cost for multi-class classifier design. We show that the fundamental information-theoretic relationship for learning with the PL cost, the equivalence of likelihood maximization and relative entropy minimization, holds in the multi-class case for the perceptron probability model with softmax normalization. Noting the inefficiency of training a softmax network, we propose an efficient multi-class equalizer structure based on binary coding of the output classes. We show that the well-formed property of the PL cost is satisfied for both the softmax classifier and the new binary-coded classifier, and present simulation results demonstrating this fact. Although the traditional mean square error (MSE) cost uses the available information more efficiently than the PL cost in the multi-class case, the new binary-coded multi-class equalizer is much more effective in tracking abrupt changes, owing to the well-formed property of the cost it uses.
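The two ingredients the abstract names, softmax normalization of multi-class posteriors trained under a likelihood (cross-entropy) cost, and a binary coding that represents M classes with log2(M) sigmoid outputs, can be sketched as follows. The function names and the particular product-of-bits posterior used here are illustrative assumptions, not the paper's exact construction:

```python
import math

def softmax(scores):
    # Softmax normalization: maps M raw network outputs to a valid
    # posterior distribution (non-negative, sums to 1).
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def pl_cost(posteriors, true_class):
    # PL-style per-sample cost: -log p(true class).
    # Minimizing this is equivalent to maximizing the likelihood,
    # and (in expectation) to minimizing relative entropy to the
    # true posterior.
    return -math.log(posteriors[true_class])

def binary_coded_posterior(bit_probs):
    # Illustrative binary-coding scheme (an assumption, not the
    # paper's exact structure): each of log2(M) sigmoid outputs
    # models the probability that one bit of the class index is 1;
    # treating bits as independent, the class posterior is the
    # product of per-bit probabilities.
    M = 2 ** len(bit_probs)
    posteriors = []
    for c in range(M):
        p = 1.0
        for bit, q in enumerate(bit_probs):
            b = (c >> bit) & 1
            p *= q if b else (1.0 - q)
        posteriors.append(p)
    return posteriors
```

For M = 4 classes, the binary-coded structure needs only two sigmoid outputs instead of four softmax outputs, which is the efficiency argument sketched in the abstract; e.g. `binary_coded_posterior([0.9, 0.8])` assigns probability 0.9 * 0.8 = 0.72 to class 3 (bits 11).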