Emotion recognition from speech via boosted Gaussian mixture models

  • Authors:
  • Hao Tang; Stephen M. Chu; Mark Hasegawa-Johnson; Thomas S. Huang

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL; IBM T. J. Watson Research Center, Yorktown Heights, NY; Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL; Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL

  • Venue:
  • ICME'09 Proceedings of the 2009 IEEE international conference on Multimedia and Expo
  • Year:
  • 2009

Abstract

Gaussian mixture models (GMMs) combined with the minimum error rate classifier (i.e., the Bayes optimal classifier) are popular and effective tools for speech emotion recognition. Typically, GMMs are used to model the class-conditional distributions of acoustic features, and their parameters are estimated from a training data set by the expectation-maximization (EM) algorithm. Classification is then performed so as to minimize the classification error with respect to the estimated class-conditional distributions. We call this method the EM-GMM algorithm. In this paper, we introduce a boosting algorithm for reliably and accurately estimating the class-conditional GMMs; the resulting algorithm is named the Boosted-GMM algorithm. Our speech emotion recognition experiments show that the emotion recognition rates are effectively and significantly "boosted" by the Boosted-GMM algorithm as compared to the EM-GMM algorithm. This is because boosting leads to more accurate estimates of the class-conditional GMMs, namely the class-conditional distributions of acoustic features.
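The EM-GMM baseline described in the abstract can be sketched as follows: fit one class-conditional GMM per emotion label by EM, then classify a feature vector by the Bayes decision rule, i.e., the class maximizing log p(x | class) + log P(class). This is a minimal illustration, not the authors' implementation; the class labels, feature dimensionality, and mixture settings are assumptions for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class EMGMMClassifier:
    """Minimal EM-GMM classifier sketch: one GMM per class, Bayes rule."""

    def __init__(self, n_components=4, random_state=0):
        self.n_components = n_components
        self.random_state = random_state
        self.models = {}      # class label -> fitted GaussianMixture
        self.log_priors = {}  # class label -> log prior P(class)

    def fit(self, X, y):
        # Estimate each class-conditional GMM by EM on that class's data.
        for label in np.unique(y):
            Xc = X[y == label]
            gmm = GaussianMixture(n_components=self.n_components,
                                  covariance_type="diag",
                                  random_state=self.random_state)
            gmm.fit(Xc)
            self.models[label] = gmm
            self.log_priors[label] = np.log(len(Xc) / len(X))
        return self

    def predict(self, X):
        # Bayes decision rule: argmax_c [log p(x | c) + log P(c)].
        labels = sorted(self.models)
        scores = np.column_stack(
            [self.models[l].score_samples(X) + self.log_priors[l]
             for l in labels])
        return np.array(labels)[np.argmax(scores, axis=1)]
```

In the paper's setting, X would hold acoustic feature vectors extracted from speech and y the emotion labels; the Boosted-GMM variant replaces the plain EM estimation step with a boosting procedure, whose details are not given in the abstract.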