On Improving Visual-Facial Emotion Recognition with Audio-lingual and Keyboard Stroke Pattern Information

  • Authors:
  • George A. Tsihrintzis, Maria Virvou, Ioanna-Ourania Stathopoulou, Efthimios Alepis

  • Venue:
  • WI-IAT '08 Proceedings of the 2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Volume 01
  • Year:
  • 2008

Abstract

In this paper, we investigate the possibility of improving the accuracy of visual-facial emotion recognition through the use of additional (complementary) information. The investigation is based on three empirical studies that we have conducted involving human subjects and human observers. The studies were concerned with the recognition of emotions from the visual-facial modality, from audio-lingual information, and from keyboard-stroke information, respectively. They were motivated by the relative shortage of previous empirical work on the strengths and weaknesses of each modality, which must be understood before one can determine the extent to which keyboard-stroke and audio-lingual information complements, and improves the recognition accuracy of, the visual-facial modality. Specifically, our research focused on the recognition of six basic emotion states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotionless state, which we refer to as neutral. We have found that the visual-facial modality may allow the recognition of certain states, such as neutral and surprise, with sufficient accuracy. However, its accuracy in recognizing anger and disgust can be improved significantly if assisted by keyboard-stroke information.
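
The abstract does not specify how the modalities are combined; the sketch below is only an illustrative, hypothetical example of late fusion, where per-emotion scores from a visual-facial classifier and a keyboard-stroke classifier are merged with fixed weights. The weights and scores are placeholders, not values reported in the paper.

```python
# Hypothetical late-fusion sketch: combine per-emotion scores from two
# modalities so keyboard-stroke evidence can complement the visual-facial
# classifier. All numbers below are illustrative placeholders.

EMOTIONS = ["neutral", "happiness", "sadness", "surprise", "anger", "disgust"]

def fuse_scores(visual, keyboard, w_visual=0.6, w_keyboard=0.4):
    """Weighted sum of two per-emotion score dictionaries (assumed weights)."""
    return {
        emotion: w_visual * visual.get(emotion, 0.0)
                 + w_keyboard * keyboard.get(emotion, 0.0)
        for emotion in EMOTIONS
    }

if __name__ == "__main__":
    # Hypothetical case: the visual-facial modality is unsure between anger
    # and disgust, while keyboard-stroke evidence clearly favours anger.
    visual_scores = {"anger": 0.35, "disgust": 0.30, "neutral": 0.20,
                     "sadness": 0.10, "surprise": 0.03, "happiness": 0.02}
    keyboard_scores = {"anger": 0.55, "disgust": 0.15, "neutral": 0.20,
                       "sadness": 0.05, "surprise": 0.03, "happiness": 0.02}

    fused = fuse_scores(visual_scores, keyboard_scores)
    print(max(fused, key=fused.get))  # prints "anger"
```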