Bimodal person-dependent emotion recognition: comparison of feature level and decision level information fusion

  • Authors:
  • Muharram Mansoorizadeh; Nasrollah M. Charkari

  • Affiliations:
  • Tarbiat Modarres University, Tehran, Iran; Tarbiat Modarres University, Tehran, Iran

  • Venue:
  • Proceedings of the 1st international conference on PErvasive Technologies Related to Assistive Environments
  • Year:
  • 2008

Abstract

Modern HCI systems aim to establish a new channel of human-computer interaction in which human emotion is understandable by computers. Since humans convey emotional behavior through a combination of modalities (e.g., facial expressions and speech articulation), an HCI system should reliably perceive emotional information from multiple channels. The goal of this paper is to propose an approach for combining emotion-related information from speech and facial expressions. Two fusion approaches, feature level and decision level, are presented and compared experimentally. Results show that decision-level fusion performs better than the other systems.
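
The abstract contrasts two generic fusion schemes. The sketch below illustrates the distinction only; the synthetic placeholder features, SVM classifier choice, and probability-averaging combination rule are assumptions for illustration, not the authors' actual setup.

```python
# Generic illustration (not the paper's implementation):
# feature-level fusion concatenates modality features before a single
# classifier; decision-level fusion trains one classifier per modality
# and combines their output scores.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
X_speech = rng.normal(size=(n, 12))   # placeholder speech features
X_face = rng.normal(size=(n, 20))     # placeholder facial-expression features
y = rng.integers(0, 6, size=n)        # placeholder emotion class labels

# Feature-level fusion: one classifier on the concatenated feature vector.
clf_feat = SVC(probability=True).fit(np.hstack([X_speech, X_face]), y)

# Decision-level fusion: per-modality classifiers whose posterior
# probability estimates are averaged (one simple combination rule).
clf_s = SVC(probability=True).fit(X_speech, y)
clf_f = SVC(probability=True).fit(X_face, y)

def decision_level_predict(xs, xf):
    p = (clf_s.predict_proba(xs) + clf_f.predict_proba(xf)) / 2.0
    return clf_s.classes_[np.argmax(p, axis=1)]

feature_level_pred = clf_feat.predict(np.hstack([X_speech, X_face]))
decision_level_pred = decision_level_predict(X_speech, X_face)
```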