Maximum likelihood discriminant feature spaces

  • Authors:
  • G. Saon; M. Padmanabhan; R. Gopinath; S. Chen

  • Affiliations:
  • IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA

  • Venue:
  • ICASSP '00: Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume 02
  • Year:
  • 2000

Abstract

Linear discriminant analysis (LDA) is known to be inadequate for classes with unequal sample covariances. There has been interest in generalizing LDA to heteroscedastic discriminant analysis (HDA) by removing the equal within-class covariance constraint. This paper presents a new approach to HDA that defines an objective function which maximizes the class discrimination in the projected subspace while ignoring the rejected dimensions. Moreover, we investigate the link between discrimination and the likelihood of the projected samples, and show that HDA can be viewed as a constrained maximum likelihood (ML) projection for a full-covariance Gaussian model, the constraint being given by the maximization of the projected between-class scatter volume. It is shown that, under diagonal-covariance Gaussian modeling constraints, applying a diagonalizing maximum likelihood linear transform (MLLT) to the HDA space increases classification accuracy, even though HDA alone actually degrades recognition performance. Experiments performed on the Switchboard and Voicemail databases show a 10-13% relative improvement in word error rate over standard cepstral processing.
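
For readers who want to experiment with the criterion the abstract describes, the sketch below is a minimal NumPy/SciPy illustration, not the authors' implementation. It assumes the HDA objective takes the form H(theta) = N log|theta B theta'| - sum_j N_j log|theta W_j theta'|, where B is the between-class scatter, W_j and N_j are the per-class ML covariance and sample count, and theta is the p-by-n projection; all function and variable names here are hypothetical, and the generic quasi-Newton optimizer stands in for whatever procedure the paper actually uses.

```python
import numpy as np
from scipy.optimize import minimize

def class_stats(X, y):
    """Per-class counts, ML covariances W_j, and between-class scatter B."""
    mu = X.mean(axis=0)
    counts, means, covs = [], [], []
    for c in np.unique(y):
        Xc = X[y == c]
        counts.append(len(Xc))
        means.append(Xc.mean(axis=0))
        covs.append(np.cov(Xc, rowvar=False, bias=True))  # ML (biased) estimate
    B = sum(n * np.outer(m - mu, m - mu) for n, m in zip(counts, means)) / len(X)
    return np.array(counts), covs, B

def neg_hda_objective(theta_flat, covs, counts, B, p, n):
    """Negative of H(theta) = N log|theta B theta'| - sum_j N_j log|theta W_j theta'|."""
    theta = theta_flat.reshape(p, n)
    _, logdet_B = np.linalg.slogdet(theta @ B @ theta.T)
    H = counts.sum() * logdet_B
    for Nj, Wj in zip(counts, covs):
        _, logdet_W = np.linalg.slogdet(theta @ Wj @ theta.T)
        H -= Nj * logdet_W
    return -H  # minimize the negative to maximize H

# Toy usage: three heteroscedastic Gaussian classes, projected from 4-D to 2-D.
rng = np.random.default_rng(0)
n, p = 4, 2
means = [np.zeros(n), np.full(n, 2.0), np.array([2.0, -2.0, 1.0, -1.0])]
covs_true = [np.diag([1, 5, 1, 1]), np.diag([5, 1, 1, 1]), np.diag([1, 1, 5, 1])]
X = np.vstack([rng.multivariate_normal(m, C, 200) for m, C in zip(means, covs_true)])
y = np.repeat([0, 1, 2], 200)

counts, covs, B = class_stats(X, y)
theta0 = rng.standard_normal((p, n)).ravel()
res = minimize(neg_hda_objective, theta0, args=(covs, counts, B, p, n),
               method="L-BFGS-B")
theta = res.x.reshape(p, n)
Z = X @ theta.T  # HDA-projected features
```

Note that H is invariant to any invertible linear transform of the rows of theta, so the optimizer converges to an equivalence class of projections rather than a unique matrix. In the paper's pipeline, a diagonalizing MLLT would then be estimated on the projected features before diagonal-covariance Gaussian modeling; that step is omitted from this sketch.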