Emotion is an important element of expressive speech synthesis. Unlike traditional simulations of discrete emotion categories, this paper attempts to synthesize emotional speech using "strong", "medium", and "weak" intensity classifications. Three models are compared: a linear modification model (LMM), a Gaussian mixture model (GMM), and a classification and regression tree (CART) model. The LMM directly modifies sentence F0 contours and syllabic durations according to the acoustic distributions of emotional speech, such as F0 topline, F0 baseline, duration, and intensity. Further analysis shows that emotional speech is also related to stress and linguistic information. Unlike the linear modification method, the GMM and CART models map the subtle prosody distributions between neutral and emotional speech; while the GMM uses acoustic features only, the CART model also integrates linguistic features into the mapping. A pitch target model optimized for describing Mandarin F0 contours is also introduced. For all conversion methods, a deviation of perceived expressiveness (DPE) measure is defined to evaluate the expressiveness of the output speech. The results show that the LMM performs worst among the three methods, the GMM method is more suitable for a small training set, and the CART method gives the best emotional speech output when trained on a large context-balanced corpus. The methods discussed in this paper indicate ways to generate emotional speech in speech synthesis. The objective and subjective evaluation procedures are also analyzed. These results support the use of neutral-semantic-content text in databases for emotional speech synthesis.
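As a rough illustration of the linear modification idea summarized above, the sketch below rescales a neutral F0 contour into a target emotion's baseline-to-topline range and applies a global duration ratio. It is a minimal sketch under assumed statistics, not the paper's implementation; the function names, ranges, and numeric values are hypothetical.

```python
import numpy as np

def lmm_convert_f0(f0_neutral, neutral_range, emotion_range):
    """Linearly map a neutral F0 contour (Hz) from the neutral
    baseline/topline range onto the target emotion's range."""
    n_base, n_top = neutral_range
    e_base, e_top = emotion_range
    scale = (e_top - e_base) / (n_top - n_base)
    return e_base + (f0_neutral - n_base) * scale

def lmm_convert_duration(durations, ratio):
    """Apply a global stretch/compression ratio to syllable durations."""
    return durations * ratio

# Toy utterance of five syllables (all values are made up for illustration).
f0_neutral = np.array([130.0, 180.0, 210.0, 160.0, 125.0])   # Hz
dur_neutral = np.array([0.18, 0.22, 0.25, 0.20, 0.30])       # seconds
f0_strong = lmm_convert_f0(f0_neutral, (120.0, 220.0), (140.0, 300.0))
dur_strong = lmm_convert_duration(dur_neutral, 0.85)  # e.g. faster tempo for a "strong" emotion
```

In contrast to this purely global rescaling, the GMM and CART mappings described in the abstract would operate on local prosodic units and, in the CART case, condition the mapping on linguistic context as well.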