Integrating articulatory features into HMM-based parametric speech synthesis

  • Authors:
  • Zhen-Hua Ling (iFlytek Speech Lab, University of Science and Technology of China, Hefei, China)
  • Korin Richmond (Center for Speech Technology Research, University of Edinburgh, Edinburgh, UK)
  • Junichi Yamagishi (Center for Speech Technology Research, University of Edinburgh, Edinburgh, UK)
  • Ren-Hua Wang (iFlytek Speech Lab, University of Science and Technology of China, Hefei, China)

  • Venue:
  • IEEE Transactions on Audio, Speech, and Language Processing
  • Year:
  • 2009


Abstract

This paper presents an investigation into ways of integrating articulatory features into hidden Markov model (HMM)-based parametric speech synthesis. In broad terms, this may be achieved by estimating the joint distribution of acoustic and articulatory features during training, which may then be used in conjunction with a maximum-likelihood criterion to produce acoustic synthesis parameters for generating speech. Within this broad approach, we explore several variations that are possible in the construction of an HMM-based synthesis system which allow articulatory features to influence acoustic modeling: model clustering, state synchrony, and cross-stream feature dependency. Performance is evaluated using the RMS error of generated acoustic parameters as well as formal listening tests. Our results show that the accuracy of acoustic parameter prediction and the naturalness of synthesized speech can be improved when shared clustering and asynchronous-state model structures are adopted for combined acoustic and articulatory features. Most significantly, however, our experiments demonstrate that modeling the dependency between these two feature streams can make speech synthesis systems more flexible. The characteristics of synthetic speech can be easily controlled by modifying generated articulatory features as part of the process of producing acoustic synthesis parameters.
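
To give a rough feel for the cross-stream dependency idea described in the abstract, the sketch below conditions acoustic parameters on an articulatory vector under a single joint Gaussian. This is only an illustration under assumed modeling choices (one Gaussian per state, hypothetical dimensionalities and variable names), not the paper's actual dependency model; the full text describes the specific model structures evaluated.

```python
import numpy as np

def condition_acoustic_on_articulatory(mu, sigma, n_ac, x_art):
    """Condition a joint Gaussian over [acoustic; articulatory] features
    on an (optionally hand-modified) articulatory vector x_art, returning
    the mean and covariance of the acoustic stream given x_art.

    mu    : (n_ac + n_art,) joint mean vector
    sigma : (n_ac + n_art, n_ac + n_art) joint covariance matrix
    n_ac  : dimensionality of the acoustic stream
    x_art : (n_art,) articulatory feature vector
    """
    mu_ac, mu_art = mu[:n_ac], mu[n_ac:]
    s_aa = sigma[:n_ac, :n_ac]          # acoustic-acoustic block
    s_ax = sigma[:n_ac, n_ac:]          # acoustic-articulatory block
    s_xx = sigma[n_ac:, n_ac:]          # articulatory-articulatory block
    gain = s_ax @ np.linalg.inv(s_xx)   # regression of acoustics on articulation
    mean_cond = mu_ac + gain @ (x_art - mu_art)
    cov_cond = s_aa - gain @ s_ax.T
    return mean_cond, cov_cond

# Toy usage: two acoustic dimensions, one articulatory dimension.
# Shifting the articulatory value shifts the predicted acoustic mean,
# mirroring the kind of control the abstract describes.
mu = np.array([1.0, -0.5, 0.2])
sigma = np.array([[1.0, 0.3, 0.4],
                  [0.3, 1.0, 0.2],
                  [0.4, 0.2, 1.0]])
mean_a, _ = condition_acoustic_on_articulatory(mu, sigma, 2, np.array([0.2]))
mean_b, _ = condition_acoustic_on_articulatory(mu, sigma, 2, np.array([0.8]))
print(mean_a, mean_b)
```

In this toy setting, changing the articulatory input moves the conditional acoustic mean through the cross-covariance term, which is the intuition behind controlling synthetic speech by modifying generated articulatory features.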