EmoEmma: emotional speech input for interactive storytelling

  • Authors:
  • Fred Charles; David Pizzi; Marc Cavazza; Thurid Vogt; Elisabeth André

  • Affiliations:
  • University of Teesside, Middlesbrough, United Kingdom; University of Teesside, Middlesbrough, United Kingdom; University of Teesside, Middlesbrough, United Kingdom; University of Augsburg, Augsburg, Germany; University of Augsburg, Augsburg, Germany

  • Venue:
  • Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
  • Year:
  • 2009

Abstract

Whilst techniques for narrative generation and agent behaviour have made significant progress in recent years, natural language processing remains a bottleneck hampering the scalability of Interactive Storytelling systems. This demonstrator introduces a novel interaction technique based solely on emotional speech recognition. It allows the user to interact with virtual actors through speech, without any constraints on style or expressivity, by mapping the recognised emotional categories to narrative situations and virtual characters' feelings.
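
The sketch below illustrates the kind of mapping the abstract describes: emotional categories recognised from the user's speech are translated into feeling updates for a virtual character, which in turn drive the choice of narrative situation. The category names, the `Character` class, and the update rules are illustrative assumptions, not the system's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical emotional categories a speech-emotion recogniser might output.
EMOTION_CATEGORIES = {"positive-active", "positive-passive",
                      "negative-active", "negative-passive", "neutral"}


@dataclass
class Character:
    """A virtual actor whose feelings drive narrative choices."""
    name: str
    feelings: dict = field(default_factory=lambda: {"affection": 0.0,
                                                    "distress": 0.0})


def apply_emotion(character: Character, category: str,
                  intensity: float = 1.0) -> None:
    """Map a recognised emotional category onto a character's feelings."""
    if category not in EMOTION_CATEGORIES:
        raise ValueError(f"unknown emotion category: {category}")
    if category.startswith("positive"):
        character.feelings["affection"] += 0.2 * intensity
    elif category.startswith("negative"):
        character.feelings["distress"] += 0.2 * intensity
    # "neutral" leaves the character's feelings unchanged.


def select_narrative_situation(character: Character) -> str:
    """Pick a narrative situation from the character's dominant feeling."""
    if character.feelings["affection"] >= character.feelings["distress"]:
        return "reconciliation-scene"
    return "confrontation-scene"


# Example: the user's utterance is classified as negative-active speech.
emma = Character("Emma")
apply_emotion(emma, "negative-active", intensity=0.8)
print(select_narrative_situation(emma))  # -> "confrontation-scene"
```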