Performance analysis of acoustic emotion recognition for in-car conversational interfaces

  • Authors:
  • Christian Martyn Jones; Ing-Marie Jonsson

  • Affiliations:
  • University of the Sunshine Coast, Queensland, Australia; Department of Communication, Stanford University, California

  • Venue:
  • UAHCI'07 Proceedings of the 4th international conference on Universal access in human-computer interaction: ambient interaction
  • Year:
  • 2007

Abstract

The automotive industry is integrating more technologies into the standard new-car kit. New cars often provide speech-enabled communications such as voice dialling, as well as control over the car cockpit, including entertainment systems, climate and satellite navigation. In addition, there is the potential for a richer interaction between driver and car: automatically recognising the emotional state of the driver and responding intelligently and appropriately. Driver emotion and driving performance are often intrinsically linked, and knowledge of the driver's emotion can enable the car to support the driving experience and encourage better driving. Automatically recognising driver emotion is a challenge, and this paper presents a performance analysis of our in-car acoustic emotion recognition system.