Multiple classifier systems for the recognition of human emotions

  • Authors:
  • Friedhelm Schwenker; Stefan Scherer; Miriam Schmidt; Martin Schels; Michael Glodek

  • Affiliations:
  • Institute of Neural Information Processing, University of Ulm, Ulm (all authors)

  • Venue:
  • MCS'10 Proceedings of the 9th international conference on Multiple Classifier Systems
  • Year:
  • 2010

Abstract

Research in the area of human-computer interaction (HCI) increasingly addresses the integration of some form of emotional intelligence into the system. Such systems must be able to recognize, interpret, and generate emotions. Although human emotions are expressed through different modalities such as speech, facial expressions, and hand or body gestures, most research in affective computing has been conducted on unimodal emotion recognition. In principle, a multimodal approach to emotion recognition should be more accurate and more robust against missing or noisy data. In this study we consider multiple classifier systems for the classification of facial expressions, and additionally present a prototype of an audio-visual laughter detection system. Finally, a novel implementation of a Java process engine for pattern recognition and information fusion is described.
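To illustrate the multimodal idea the abstract describes, the following is a minimal sketch of decision-level (late) fusion, in which each unimodal classifier emits a posterior distribution over emotion classes and the fused decision averages whatever posteriors are available. All names, class labels, and the averaging rule here are illustrative assumptions, not the paper's actual architecture; the paper's own system may use different fusion schemes.

```python
# Hypothetical late-fusion sketch (not the paper's implementation):
# each unimodal classifier returns a posterior over emotion classes;
# fusion averages the available posteriors, so a missing or unusable
# modality simply drops out of the decision.

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # illustrative label set

def fuse_posteriors(posteriors):
    """Average class posteriors from the modalities that produced output.

    posteriors: list of dicts mapping class label -> probability;
    an entry may be None when that modality's input is missing or noisy.
    """
    available = [p for p in posteriors if p is not None]
    if not available:
        raise ValueError("no modality produced a posterior")
    return {c: sum(p[c] for p in available) / len(available)
            for c in EMOTIONS}

def decide(fused):
    """Pick the class with the highest fused posterior."""
    return max(fused, key=fused.get)

# Example: audio and gesture classifiers respond, video input is missing.
audio   = {"happy": 0.6, "sad": 0.1, "angry": 0.1, "neutral": 0.2}
video   = None
gesture = {"happy": 0.4, "sad": 0.2, "angry": 0.2, "neutral": 0.2}

fused = fuse_posteriors([audio, video, gesture])
print(decide(fused))  # -> happy
```

This illustrates the robustness argument in the abstract: because fusion happens at the decision level, a dropped modality degrades the evidence gracefully instead of invalidating the whole input vector, as would happen with naive feature-level concatenation.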