Anchor model fusion for emotion recognition in speech

  • Authors:
  • Carlos Ortego-Resa; Ignacio Lopez-Moreno; Daniel Ramos; Joaquin Gonzalez-Rodriguez

  • Affiliations:
  • ATVS-Biometric Recognition Group, Universidad Autonoma de Madrid, Spain (all authors)

  • Venue:
  • BioID_MultiComm'09 Proceedings of the 2009 joint COST 2101 and 2102 international conference on Biometric ID management and multimodal communication
  • Year:
  • 2009

Abstract

In this work, a novel method for system fusion in emotion recognition from speech is presented. The proposed approach, namely Anchor Model Fusion (AMF), exploits the characteristic behaviour of the scores that a speech utterance produces across different emotion models, mapping them to a back-end anchor-model feature space followed by an SVM classifier. Experiments are presented on three different databases: Ahumada III, with speech obtained from real forensic cases, and SUSAS Actual and SUSAS Simulated. Results comparing AMF with a simple sum-fusion scheme after normalization show a significant performance improvement for the proposed technique in two of the three experimental set-ups, without degrading performance in the third one.
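As a rough illustration of the general idea (not the authors' implementation), the sketch below assumes a set of per-emotion scoring functions: each utterance is mapped to a vector of its scores against every emotion model (the anchor-model feature space), and an SVM is trained on those vectors as the back-end classifier. The scoring functions, toy data, and names such as `anchor_features` are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the per-emotion models; each returns a
# log-likelihood-style score for a fixed-length utterance feature vector.
emotion_models = {
    "neutral": lambda x: float(-np.sum((x - 0.0) ** 2)),
    "anger":   lambda x: float(-np.sum((x - 1.0) ** 2)),
    "stress":  lambda x: float(-np.sum((x - 2.0) ** 2)),
}

def anchor_features(utterance):
    """Map an utterance to the anchor-model space: its score under every emotion model."""
    return np.array([emotion_models[e](utterance) for e in sorted(emotion_models)])

# Toy 'utterances' with their true emotion labels.
X_utt = [rng.normal(loc=m, size=10) for m in (0.0, 0.0, 1.0, 1.0, 2.0, 2.0)]
y = ["neutral", "neutral", "anger", "anger", "stress", "stress"]

# Back-end SVM classifier trained on the anchor-model score vectors.
X_anchor = np.stack([anchor_features(u) for u in X_utt])
clf = SVC(kernel="rbf", gamma="scale").fit(X_anchor, y)

test_utterance = rng.normal(loc=1.0, size=10)
print(clf.predict(anchor_features(test_utterance)[None, :]))
```

In this sketch the discriminative power comes from the relative pattern of scores across all emotion models rather than from any single model's score, which is the intuition behind mapping to an anchor-model space before classification.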