Combining Empirical Studies of Audio-Lingual and Visual-Facial Modalities for Emotion Recognition

  • Authors:
  • M. Virvou; G. A. Tsihrintzis; E. Alepis; I.-O. Stathopoulou; K. Kabassi

  • Affiliations:
  • Department of Informatics, University of Piraeus, Piraeus 185 34, Greece (all authors)

  • Venue:
  • KES '07: Proceedings of the 11th International Conference on Knowledge-Based Intelligent Information and Engineering Systems and the XVII Italian Workshop on Neural Networks
  • Year:
  • 2007

Abstract

In this paper, we present and discuss two empirical studies that we have conducted, involving human subjects and human observers, concerning the recognition of emotions from the audio-lingual and visual-facial modalities. Many researchers agree that these modalities are complementary to each other and that combining them can improve the accuracy of affective user models. However, there is a shortage of empirical work on the strengths and weaknesses of each modality, which is needed so that more accurate recognizers can be built. In our research, we have investigated the recognition of emotions from the two modalities with respect to six basic emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotionless state, which we refer to as neutral. We have found that certain states, such as neutral, happiness, and surprise, are more clearly recognized from the visual-facial modality, whereas sadness and disgust are more clearly recognized from the audio-lingual modality.
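
The finding that each modality is stronger for different emotional states suggests a late-fusion scheme in which the two recognizers' outputs are combined with per-emotion weights. The Python sketch below illustrates this idea; the weight values, the fuse function, and the example probabilities are illustrative assumptions, not the method or data reported in the paper.

```python
# A minimal sketch of per-emotion weighted late fusion, assuming each
# modality's recognizer outputs a probability per emotional state.

EMOTIONS = ["neutral", "happiness", "sadness", "surprise", "anger", "disgust"]

# Hypothetical visual-facial weights; the audio-lingual modality receives
# the complement (1 - w). States the studies found more clearly recognized
# visually (neutral, happiness, surprise) get a higher visual weight;
# states recognized better audio-lingually (sadness, disgust) get a lower one.
VISUAL_WEIGHT = {
    "neutral": 0.7, "happiness": 0.7, "surprise": 0.7,
    "sadness": 0.3, "anger": 0.5, "disgust": 0.3,
}

def fuse(visual_probs: dict, audio_probs: dict) -> str:
    """Return the emotional state with the highest fused score."""
    fused = {
        e: VISUAL_WEIGHT[e] * visual_probs.get(e, 0.0)
           + (1.0 - VISUAL_WEIGHT[e]) * audio_probs.get(e, 0.0)
        for e in EMOTIONS
    }
    return max(fused, key=fused.get)

# Example: the visual recognizer favors surprise while the audio-lingual
# recognizer favors sadness; the per-emotion weights tip the decision.
visual = {"surprise": 0.6, "sadness": 0.2, "neutral": 0.2}
audio = {"sadness": 0.5, "surprise": 0.3, "neutral": 0.2}
print(fuse(visual, audio))  # surprise (0.51) beats sadness (0.41)
```

A per-emotion weighting like this exploits each modality where it is most reliable, rather than applying a single global weight to all states.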