Grounding emotions in human-machine conversational systems

  • Authors:
  • Giuseppe Riccardi; Dilek Hakkani-Tür

  • Affiliations:
  • AT&T Labs–Research, Florham Park, New Jersey; AT&T Labs–Research, Florham Park, New Jersey

  • Venue:
  • INTETAIN '05: Proceedings of the First International Conference on Intelligent Technologies for Interactive Entertainment
  • Year:
  • 2005


Abstract

In this paper we investigate the role of user emotions in human-machine goal-oriented conversations. There has been growing interest in predicting emotions from both acted and non-acted spontaneous speech, and much of the research effort has gone into determining the appropriate emotion labels and improving emotion prediction accuracy. Here we evaluate the value of the user's emotional state within a computational model of emotion processing. We consider a binary representation of emotions (positive vs. negative) in the context of a goal-driven conversational system. For each human-machine interaction we acquire the temporal emotion sequence running from the initial to the final conversational state. These traces are used as features to characterize the user state dynamics. We ground the emotion traces by associating their patterns with dialog strategies and their effectiveness. To quantify the value of emotion indicators, we evaluate their predictions in terms of speech recognition and spoken language understanding errors as well as task success or failure. We report results on 11.5K dialog samples from the How May I Help You? corpus.
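The abstract's idea of a temporal emotion trace can be sketched concretely. Below is a hypothetical illustration (not the authors' implementation): each dialog is represented as a per-turn sequence of binary emotion labels, and simple trace features (negative-turn ratio, number of state switches, final state) are extracted as the kind of indicators one might associate with task success or failure.

```python
# Hypothetical sketch of a binary emotion trace (0 = positive, 1 = negative)
# summarized into simple features; the feature set here is illustrative only.

def trace_features(trace):
    """Summarize a per-turn emotion trace (list of 0/1) into simple features."""
    n = len(trace)
    # Fraction of turns labeled negative
    neg_ratio = sum(trace) / n if n else 0.0
    # Number of emotional state changes across consecutive turns
    switches = sum(1 for a, b in zip(trace, trace[1:]) if a != b)
    # Emotional state at the final conversational state
    final_state = trace[-1] if trace else 0
    return {"neg_ratio": neg_ratio, "switches": switches, "final_negative": final_state}

# Example: a dialog that starts positive, turns negative mid-way, and ends negative
print(trace_features([0, 0, 1, 1, 1]))
```

Features of this kind could then be fed to any standard classifier to predict task failure or high recognition/understanding error rates, which is the grounding step the abstract describes.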