Modifying spectral envelope to synthetically adjust voice quality and articulation parameters for emotional speech synthesis

  • Authors:
  • Yanqiu Shao, Zhuoran Wang, Jiqing Han, Ting Liu

  • Affiliations:
  • School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China (all authors)

  • Venue:
  • ACII'05: Proceedings of the First International Conference on Affective Computing and Intelligent Interaction
  • Year:
  • 2005


Abstract

Both prosodic and spectral features are important for emotional speech synthesis. In addition to prosodic effects, voice quality and articulation parameters should also be modified in an emotional speech synthesis system. Conventionally, separate rules and filters are designed to process each of these parameters. This paper demonstrates that, by modifying the spectral envelope, voice quality and articulation can be adjusted together, so that each parameter need not be modified separately according to hand-crafted rules. This in turn makes the synthesis system more flexible, since an automatic spectral envelope model can be built with machine learning methods. A perception test further shows that the best emotional synthetic speech is obtained when both prosodic and spectral features are modified.
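The abstract's core idea is reshaping the spectral envelope of a speech frame while preserving its fine (excitation) structure. The paper does not give an algorithm here, but a common way to realize this is cepstral smoothing: lifter the real cepstrum to separate the envelope from the harmonic detail, then apply a frequency-dependent gain to the envelope only. The sketch below (NumPy, single frame, hypothetical function names and a made-up `gain_curve`) is one minimal illustration of that general technique, not the authors' actual model.

```python
import numpy as np

def spectral_envelope(frame, lifter_order=20):
    """Cepstrally smoothed spectral envelope of one windowed frame."""
    spectrum = np.fft.rfft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.irfft(log_mag)
    # Low-time liftering keeps only the slowly varying envelope component.
    liftered = np.zeros_like(cepstrum)
    liftered[:lifter_order] = cepstrum[:lifter_order]
    liftered[-lifter_order + 1:] = cepstrum[-lifter_order + 1:]
    return np.exp(np.fft.rfft(liftered).real)

def modify_envelope(frame, gain_curve):
    """Reshape the envelope while keeping the fine harmonic structure."""
    spectrum = np.fft.rfft(frame)
    env = spectral_envelope(frame)
    fine = np.abs(spectrum) / (env + 1e-12)   # excitation / fine structure
    new_mag = fine * env * gain_curve         # gain acts on the envelope only
    phase = np.angle(spectrum)
    return np.fft.irfft(new_mag * np.exp(1j * phase), n=len(frame))

# Illustrative usage: boost high frequencies, a crude stand-in for a
# "tenser" voice quality (the actual rules/model are in the paper).
frame = np.hanning(512) * np.sin(2 * np.pi * 0.05 * np.arange(512))
gain_curve = np.linspace(1.0, 1.5, 257)       # one gain per rfft bin
modified = modify_envelope(frame, gain_curve)
```

In a full system this per-frame operation would run inside an overlap-add analysis/synthesis loop, with `gain_curve` predicted per frame by the learned spectral envelope model the abstract alludes to.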