Multimodal Approach for Emotion Recognition Using a Formal Computational Model

  • Authors:
  • Imen Tayari Meftah; Nhan Le Thanh; Chokri Ben Amar

  • Affiliations:
  • Imen Tayari Meftah: Wimmics, INRIA & University of Nice, Nice, France & REGIM Laboratory, University of Sfax, Sfax, Tunisia
  • Nhan Le Thanh: Wimmics, INRIA & University of Nice, Nice, France
  • Chokri Ben Amar: REGIM Laboratory, University of Sfax, Sfax, Tunisia

  • Venue:
  • International Journal of Applied Evolutionary Computation
  • Year:
  • 2013

Abstract

Emotions play a crucial role in human-computer interaction. They are generally expressed and perceived through multiple modalities, such as speech, facial expressions, and physiological signals. The complexity of emotions makes their acquisition very difficult and renders unimodal systems, i.e., those observing only one source of emotion, unreliable and often unfeasible in highly complex applications. Moreover, the lack of a standard for modeling human emotions hinders the sharing of affective information between applications. In this paper, the authors present a multimodal approach for emotion recognition from multiple sources of information. The paper aims to provide a multimodal system for emotion recognition and exchange that facilitates inter-system exchanges and improves the credibility of emotional interaction between users and computers. The authors develop a multimodal emotion recognition method from physiological data based on signal-processing algorithms. Their method can recognize emotions composed of several aspects, such as simulated and masked emotions. It uses a new multidimensional model of emotional states based on an algebraic representation. The experimental results show that the proposed multimodal method improves recognition rates compared to the unimodal approach. Compared to state-of-the-art multimodal techniques, the proposed method gives good results, with 72% correct recognition.
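
The abstract describes the multidimensional algebraic model and the multimodal fusion only at a high level. The Python sketch below illustrates the general idea under stated assumptions, not the authors' exact method: it assumes a basis of six basic emotions, represents an emotional state as a vector of intensities over that basis, and fuses per-modality estimates with a simple weighted sum. The basis set, the weights, the fusion rule, and the example channel names (ecg, gsr) are all illustrative choices, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical basis of basic emotions spanning the emotion space
# (an assumption; the paper does not list its basis in the abstract).
BASIS = ("joy", "sadness", "anger", "fear", "surprise", "disgust")


@dataclass
class EmotionVector:
    """An emotional state as a vector of intensities over BASIS."""
    components: tuple  # one intensity in [0, 1] per basis emotion

    def __add__(self, other):
        # Component-wise addition: combining two emotional aspects.
        return EmotionVector(
            tuple(a + b for a, b in zip(self.components, other.components))
        )

    def scale(self, w):
        # Scale all intensities by a modality weight.
        return EmotionVector(tuple(w * c for c in self.components))

    def dominant(self):
        """Return the basis emotion with the highest intensity."""
        return BASIS[max(range(len(BASIS)), key=lambda i: self.components[i])]


def fuse(modality_estimates, weights):
    """Weighted-sum fusion of per-modality emotion vectors
    (an assumed fusion rule; the abstract does not specify one)."""
    fused = EmotionVector((0.0,) * len(BASIS))
    for estimate, w in zip(modality_estimates, weights):
        fused = fused + estimate.scale(w)
    return fused


# Usage: fuse hypothetical estimates from two physiological channels.
ecg = EmotionVector((0.1, 0.2, 0.6, 0.3, 0.0, 0.1))  # e.g., from heart rate
gsr = EmotionVector((0.0, 0.1, 0.7, 0.4, 0.1, 0.0))  # e.g., from skin conductance
print(fuse([ecg, gsr], weights=[0.5, 0.5]).dominant())  # -> "anger"
```

Representing states as vectors over a fixed basis makes the model algebraic in the sense the abstract suggests: emotional aspects (e.g., a masked emotion over a felt one) can be composed by vector addition before the dominant component is read off.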