Annotation of Emotion in Dialogue: The Emotion in Cooperation Project

  • Authors:
  • Federica Cavicchio; Massimo Poesio

  • Affiliations:
  • CIMeC, Università degli Studi di Trento, Rovereto (TN), Italy 38068

  • Venue:
  • PIT '08: Proceedings of the 4th IEEE Tutorial and Research Workshop on Perception and Interactive Technologies for Speech-Based Systems: Perception in Multimodal Dialogue Systems
  • Year:
  • 2008

Abstract

In this research we investigate the relationship between emotion and cooperation in dialogue tasks, an area in which many questions are still unsolved. One of the main open issues is the labeling of "blended" emotions and their recognition. Agreement among raters labeling and naming emotions is usually low and, surprisingly, emotion recognition is higher under modality deprivation (audio only or video only vs. bimodal). Because of these previous results, we do not ask raters to label emotions directly, but to annotate our corpus with a small set of features (such as lip or eyebrow shape). The analyzed materials come from an audiovisual corpus of Map Task dialogues elicited with a script. We identify the "emotive" tokens through simultaneous recording of psychophysiological indexes (electrocardiogram, ECG; galvanic skin conductance, GSC; electromyography, EMG). After this selection, we annotate each token with our multimodal annotation scheme. Each annotation will lead to a cluster of signals identifying the emotion corresponding to a cooperative/non-cooperative level; the last step involves measuring agreement among coders and the reliability of the emotion description. Future research will deal with brain imaging experiments on the effect of putting emotions into words and with the role of context in emotion recognition.
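The abstract does not spell out how the psychophysiological traces are turned into "emotive" tokens; the sketch below is only one plausible reading, assuming the GSC channel is available as a 1-D NumPy array and that a token is flagged when skin conductance peaks well above its own baseline. The function name `find_emotive_spans`, the sampling rate, and the threshold are all illustrative, not from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def find_emotive_spans(gsc, sample_rate, z_thresh=2.0, min_gap_s=1.0):
    """Return sample indices of candidate 'emotive' tokens in a GSC trace.

    A peak is kept when it rises z_thresh standard deviations above the
    recording's own baseline, mirroring the idea of using arousal in the
    psychophysiological signal to pre-select tokens for annotation.
    (Hypothetical procedure: the paper does not specify its criterion.)
    """
    baseline = np.median(gsc)
    spread = np.std(gsc)
    peaks, _ = find_peaks(
        gsc,
        height=baseline + z_thresh * spread,     # arousal threshold
        distance=int(min_gap_s * sample_rate),   # merge nearby peaks
    )
    return peaks

# Example: a flat trace with two injected arousal bursts.
rate = 32  # Hz, an assumed GSC sampling rate
trace = np.random.default_rng(0).normal(5.0, 0.05, 60 * rate)
trace[10 * rate] += 2.0
trace[40 * rate] += 2.5
print(find_emotive_spans(trace, rate) / rate)  # peak times in seconds
```

Thresholding against the recording's own baseline, rather than an absolute conductance value, is what would make such a selection comparable across speakers, whose resting skin conductance differs.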
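The final step, agreement among coders, is likewise described only at a high level. Cohen's kappa is the standard chance-corrected agreement statistic for two coders and could be computed per feature as in this hypothetical sketch; the eyebrow-shape labels are invented for illustration.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two coders' label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of tokens labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each coder's label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[lab] * counts_b[lab] for lab in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders annotating eyebrow shape on the same six tokens.
coder1 = ["raised", "neutral", "frowning", "raised", "neutral", "raised"]
coder2 = ["raised", "neutral", "raised", "raised", "neutral", "frowning"]
print(f"kappa = {cohens_kappa(coder1, coder2):.2f}")
```

Computing kappa feature by feature (eyebrows, lips, and so on) rather than over whole emotion labels matches the abstract's motivation: raters agree more readily on observable facial features than on emotion names.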