Speaker state classification based on fusion of asymmetric simple partial least squares (SIMPLS) and support vector machines

  • Authors:
  • Dong-Yan Huang;Zhengchen Zhang;Shuzhi Sam Ge


  • Venue:
  • Computer Speech and Language
  • Year:
  • 2014

Abstract

This paper presents our studies of the effects of acoustic features, speaker normalization methods, and statistical modeling techniques on speaker state classification. We focus on the effect of simple partial least squares (SIMPLS) in unbalanced binary classification. Beyond its dimension reduction and low computational complexity, the SIMPLS classifier (SIMPLSC) shows notably higher prediction accuracy on the minority class, i.e., the class with fewer samples. We therefore propose an asymmetric SIMPLS classifier (ASIMPLSC) to improve the performance of SIMPLSC on the majority class. Furthermore, we combine the outputs of multiple systems (the ASIMPLS classifier and support vector machines) by score-level fusion to exploit the complementary information in the diverse systems. The proposed speaker state classification system is evaluated in several experiments on unbalanced data sets. In the Interspeech 2011 Speaker State Challenge, we achieved the best result for the 2-class task of the Sleepiness Sub-Challenge, with an unweighted average recall of 71.7%. Further experiments on the SEMAINE data sets show that the ASIMPLSC achieves absolute improvements of 6.1%, 6.1%, 24.5%, and 1.3% in weighted average recall over the AVEC 2011 baseline system on the binary emotional speech classification tasks for the four dimensions of activation, expectation, power, and valence, respectively.
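
The sketch below illustrates the general idea of score-level fusion of a PLS-based classifier and an SVM on an unbalanced binary task, assuming scikit-learn. It is not the authors' implementation: scikit-learn's PLSRegression uses the NIPALS algorithm rather than de Jong's SIMPLS, and the fusion weight, score normalization, and asymmetric decision threshold shown here are illustrative assumptions.

```python
# Hedged sketch: score-level fusion of a PLS-based classifier and an SVM.
# Assumes labels y in {-1, +1}; the fusion weight (alpha) and the shifted
# decision threshold are hypothetical choices, not the paper's exact method.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline


def fused_predict(X_train, y_train, X_test,
                  n_components=10, alpha=0.5, threshold=0.0):
    """Predict labels from a weighted fusion of PLS and SVM decision scores."""
    # PLS regression on {-1, +1} targets; the continuous regression output
    # is used directly as a classification score.
    pls = make_pipeline(StandardScaler(),
                        PLSRegression(n_components=n_components))
    pls.fit(X_train, y_train.astype(float))
    pls_scores = pls.predict(X_test).ravel()

    # SVM decision-function scores on the same acoustic features.
    svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    svm.fit(X_train, y_train)
    svm_scores = svm.decision_function(X_test)

    # Z-normalize each score stream before fusing so neither system dominates.
    def znorm(s):
        return (s - s.mean()) / (s.std() + 1e-8)

    fused = alpha * znorm(pls_scores) + (1.0 - alpha) * znorm(svm_scores)

    # Shifting the threshold away from 0 trades recall between the minority
    # and majority classes, in the spirit of the asymmetric classifier.
    return np.where(fused > threshold, 1, -1)
```

With such a setup, the threshold (or an equivalent class-specific bias) would typically be tuned on a development set to balance the unweighted average recall across the two classes.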