Applying an analysis of acted vocal emotions to improve the simulation of synthetic speech
Computer Speech and Language
This project examines how the perceived emotional content of a synthesised, robotic-sounding speech signal can be modified by manipulating high-level acoustic parameters with commonly available digital sound-design tools. Stimuli were created on the basis of trends described in the literature and verified through our own analysis of emotional speech produced by actors. A listening test was run to establish whether listeners could discriminate the emotions the stimuli were intended to express. Neutral and sad sentences were identified successfully. Happy sentences were identified with a lower degree of success, while angry sentences were, in the majority of cases, confused with happy ones. From the analysis of the test results and the stimuli, we formulated hypotheses on why certain emotions were not identified successfully and how this result could be improved in further work.
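The kind of parameter manipulation described above can be sketched in a few lines. The following toy example (not taken from the paper; the emotion-to-parameter table and all numeric offsets are hypothetical placeholders loosely inspired by trends commonly reported in the emotional-speech literature, e.g. higher pitch and faster rate for happiness, the opposite for sadness) scales the fundamental frequency, speaking rate, and amplitude of a bare sine-tone "vowel":

```python
import math

# Hypothetical multiplicative offsets on a neutral baseline; the exact
# values are illustrative placeholders, not figures from the study.
EMOTION_PARAMS = {
    "neutral": {"f0_scale": 1.00, "rate_scale": 1.00, "amp_scale": 1.0},
    "happy":   {"f0_scale": 1.30, "rate_scale": 1.15, "amp_scale": 1.1},
    "sad":     {"f0_scale": 0.85, "rate_scale": 0.80, "amp_scale": 0.8},
    "angry":   {"f0_scale": 1.25, "rate_scale": 1.20, "amp_scale": 1.3},
}

def synthesize_tone(emotion, base_f0=120.0, base_dur=0.5, sr=16000):
    """Render a sine 'vowel' whose pitch, duration and amplitude follow
    the chosen emotion's parameter offsets."""
    p = EMOTION_PARAMS[emotion]
    f0 = base_f0 * p["f0_scale"]          # higher f0_scale -> higher pitch
    dur = base_dur / p["rate_scale"]      # faster rate -> shorter segment
    n = int(sr * dur)
    return [p["amp_scale"] * math.sin(2 * math.pi * f0 * t / sr)
            for t in range(n)]
```

In practice the same idea would be applied to full synthetic utterances inside a sound-design tool rather than to an isolated tone, but the mapping from an emotion label to a small set of high-level acoustic offsets is the core of the approach.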