Designing the emotional content of a robotic speech signal

  • Authors:
  • Sandra Pauletto, Tristan Bowles

  • Affiliation:
  • The University of York, Heslington, York

  • Venue:
  • Proceedings of the 5th Audio Mostly Conference: A Conference on Interaction with Sound
  • Year:
  • 2010

Abstract

This project examines how the emotional content of a synthesised, robotic-sounding speech signal can be modified by manipulating high-level acoustic parameters with commonly available digital sound design tools. Stimuli were created on the basis of trends described in the literature and verified through our own analysis of emotional speech produced by actors. A listening test was run to determine whether listeners could discriminate the emotions expressed by the stimuli. Neutral and sad sentences were identified successfully. Happy sentences were identified with a lower degree of success, while angry sentences were, in the majority of cases, confused with happy ones. From the analysis of the test results and the stimuli, we formulated hypotheses as to why certain emotions were not identified successfully and how this could be improved in future work.
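
The abstract does not specify the exact manipulations, but the kind of high-level acoustic control it describes (pitch, speaking rate, intensity) can be sketched in code. The following is a minimal illustration in Python using librosa and soundfile, not the authors' actual tool chain (they used commonly available sound design tools); the emotion presets and the input file name are placeholder assumptions drawn loosely from general trends in the emotional-speech literature, not values from the paper.

```python
# Sketch only: not the authors' method. Illustrates modifying the pitch,
# speaking rate and intensity of a neutral utterance toward target emotions.
import librosa
import soundfile as sf

def apply_emotion(y, sr, pitch_semitones=0.0, rate=1.0, gain=1.0):
    """Shift pitch, change speaking rate, and scale intensity of a signal."""
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=pitch_semitones)
    y = librosa.effects.time_stretch(y, rate=rate)
    return gain * y

# Hypothetical presets (placeholder values, not taken from the paper):
# higher pitch/rate/intensity for happy and angry, lower for sad.
PRESETS = {
    "neutral": dict(pitch_semitones=0.0,  rate=1.0,  gain=1.0),
    "sad":     dict(pitch_semitones=-2.0, rate=0.85, gain=0.7),
    "happy":   dict(pitch_semitones=3.0,  rate=1.15, gain=1.2),
    "angry":   dict(pitch_semitones=2.0,  rate=1.2,  gain=1.4),
}

# "neutral_utterance.wav" is a hypothetical input file.
y, sr = librosa.load("neutral_utterance.wav", sr=None)
for emotion, params in PRESETS.items():
    sf.write(f"{emotion}.wav", apply_emotion(y, sr, **params), sr)
```

Note that happy and angry share similar global settings in this sketch (both raised pitch, rate and intensity), which is consistent with the confusion between those two emotions reported in the listening test: distinguishing them likely requires finer-grained cues than these three global parameters.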