Emotion-State conversion for speaker recognition

  • Authors:
  • Dongdong Li;Yingchun Yang;Zhaohui Wu;Tian Wu

  • Affiliations:
  • Department of Computer Science and Technology, Zhejiang University, Hangzhou, P.R. China (all authors)

  • Venue:
  • ACII'05 Proceedings of the First international conference on Affective Computing and Intelligent Interaction
  • Year:
  • 2005


Abstract

The performance of a speaker recognition system is easily disturbed by changes in the internal states of the speaker. This ongoing work proposes a speech emotion-state conversion approach to improve the performance of speaker identification over various kinds of affective speech. The features of neutral speech are modified according to the statistical prosodic parameters of emotional utterances, and speaker models are trained on the converted speech. Experiments conducted on an emotion corpus with 14 emotion states show promising results, with performance improved by 7.2%.
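The abstract does not specify the exact conversion rule, but a common baseline for mapping neutral prosody onto the statistics of an emotional style is a mean/variance transform of the pitch contour. The sketch below is an illustrative assumption, not the paper's method; the function name and parameters are hypothetical.

```python
import numpy as np

def convert_prosody(f0_neutral, emotion_mean, emotion_std):
    """Map a neutral pitch (F0) contour onto target emotional statistics.

    A mean/variance transform: z-normalize the neutral contour, then
    rescale it using the statistical prosodic parameters (mean and
    standard deviation) estimated from emotional utterances. This is a
    standard baseline, assumed here for illustration only.
    """
    f0 = np.asarray(f0_neutral, dtype=float)
    mu, sigma = f0.mean(), f0.std()
    if sigma == 0.0:
        # Flat contour: shifting to the target mean is all we can do.
        return np.full_like(f0, emotion_mean)
    return (f0 - mu) / sigma * emotion_std + emotion_mean
```

Speaker models would then be trained on features extracted from speech resynthesized with such converted contours, so that the models better cover affective variability.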