The production and recognition of emotions in speech: features and algorithms
International Journal of Human-Computer Studies - Application of affective computing in human-computer interaction
IEICE Transactions on Information and Systems
Applying an analysis of acted vocal emotions to improve the simulation of synthetic speech
Computer Speech and Language
Voice quality conversion using interactive evolution of prosodic control
Applied Soft Computing
Expressing degree of activation in synthetic speech
IEEE Transactions on Audio, Speech, and Language Processing
Prosody conversion from neutral speech to emotional speech
IEEE Transactions on Audio, Speech, and Language Processing
Evaluating emotional algorithms using psychological scales
Proceedings of the International Workshop on Affective-Aware Virtual Agents and Social Robots
As a technique that "lets computers speak", speech synthesis is drawing increasing attention. Today, much speech synthesis software can produce neutral speech naturally and intelligibly. However, it remains hard to make computers speak with the kind of "emotion" heard in everyday speech, because emotion is difficult to model explicitly. Interactive Genetic Algorithms, which behave in a self-organizing, adaptive, and self-learning manner, are well suited to this modeling difficulty in emotional speech synthesis. This paper therefore designs an emotional speech synthesis process that dynamically adjusts the synthesis parameters (expressed as XML tags) with an Interactive Genetic Algorithm, so as to optimize the quality of the synthesized emotional speech. The paper also reports an evaluation experiment that demonstrates the feasibility of the algorithm.
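The loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the prosody parameters (`pitch`, `rate`, `volume`), their ranges, and the `<prosody .../>` tag format are assumptions standing in for whatever XML tag set the synthesizer accepts, and the human listener's rating is replaced by a programmatic `rate_fn` callback so the loop can run unattended.

```python
import random

# Hypothetical prosody parameters an SSML-like tag set might expose;
# the ranges are illustrative assumptions, not the paper's values.
PARAM_RANGES = {
    "pitch": (-50.0, 50.0),   # percent shift from neutral pitch
    "rate": (-50.0, 50.0),    # percent change in speaking rate
    "volume": (0.0, 100.0),   # loudness level
}

def random_candidate(rng):
    """One individual: a full setting of the prosody parameters."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def to_xml(cand):
    """Render a candidate as the kind of XML tag fed to the synthesizer."""
    attrs = " ".join(f'{k}="{cand[k]:.1f}"' for k in PARAM_RANGES)
    return f"<prosody {attrs}/>"

def evolve(rate_fn, pop_size=8, generations=10, seed=0):
    """Interactive GA loop. In the real system, rate_fn would be a human
    listening to the synthesized utterances and scoring them in [0, 1]."""
    rng = random.Random(seed)
    pop = [random_candidate(rng) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=rate_fn, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # Uniform crossover: each gene from one parent or the other.
            child = {k: rng.choice((a[k], b[k])) for k in PARAM_RANGES}
            # Gaussian mutation of one gene, clamped to its range.
            k = rng.choice(list(PARAM_RANGES))
            lo, hi = PARAM_RANGES[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=rate_fn)

if __name__ == "__main__":
    # Stand-in for the interactive listener: prefers a raised pitch and
    # faster rate, a crude proxy for a "happy" target voice.
    target = {"pitch": 30.0, "rate": 20.0, "volume": 80.0}
    def rate_fn(cand):
        err = sum(abs(cand[k] - target[k]) / (hi - lo)
                  for k, (lo, hi) in PARAM_RANGES.items())
        return 1.0 - err / len(PARAM_RANGES)
    print(to_xml(evolve(rate_fn)))
```

Because the top half of each generation survives unchanged, the best rating never decreases, which matters when each evaluation costs a human listening session; small populations and few generations are typical in interactive evolution for the same reason.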