A Multimodal Database for Affect Recognition and Implicit Tagging

  • Authors: Mohammad Soleymani, Jeroen Lichtenauer, Thierry Pun, Maja Pantic

  • Affiliations: University of Geneva, Carouge (Soleymani, Pun); Imperial College London (Lichtenauer, Pantic)

  • Venue: IEEE Transactions on Affective Computing
  • Year: 2012

Abstract

MAHNOB-HCI is a multimodal database of recordings of people responding to affective stimuli, collected to support research on emotion recognition and implicit tagging. A multimodal setup was arranged for the synchronized recording of face videos, audio signals, eye-gaze data, and peripheral and central nervous system physiological signals. Twenty-seven participants of both genders and from diverse cultural backgrounds took part in two experiments. In the first experiment, they watched 20 emotional video clips and self-reported their felt emotions on arousal, valence, dominance, and predictability scales, as well as with emotional keywords. In the second experiment, short videos and images were shown first without any tag and then with a correct or an incorrect tag, and participants indicated their agreement or disagreement with the displayed tags. The recorded videos and bodily responses were segmented and stored in a database, which is made available to the academic community through a web-based system. The collected data were analyzed, and single-modality and modality-fusion results are reported for both the emotion recognition and the implicit tagging experiments. These results demonstrate the potential uses of the recorded modalities and the significance of the emotion elicitation protocol.
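To make the modality-fusion analysis mentioned above concrete, the sketch below shows a minimal feature-level fusion pipeline in Python: per-trial feature vectors from two modalities are concatenated and fed to a single classifier. The synthetic data, the feature dimensions, and the SVM pipeline are illustrative assumptions only; they do not reproduce the features or classifiers used in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-trial features from two modalities.
# Real trials would yield e.g. EEG band-power features and
# peripheral-physiology statistics; shapes here are hypothetical.
n_trials = 200
eeg_feats = rng.normal(size=(n_trials, 32))    # hypothetical EEG features
periph_feats = rng.normal(size=(n_trials, 8))  # hypothetical peripheral features
labels = rng.integers(0, 3, size=n_trials)     # e.g. low/medium/high arousal

# Feature-level fusion: concatenate each trial's feature vectors,
# then train one classifier on the joint representation.
fused = np.hstack([eeg_feats, periph_feats])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"fused-modality accuracy: {scores.mean():.2f}")
```

Decision-level fusion, by contrast, would train one classifier per modality and combine their outputs (for example, by averaging class probabilities); which strategy works better is an empirical question for each modality pair.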