In this paper, we present and discuss three empirical studies that we have conducted with human subjects and human observers, concerning the recognition of emotions from the audio-lingual, visual-facial and keyboard-evidence modalities. Many researchers agree that these modalities are complementary to each other and that combining them can improve the accuracy of affective user models. However, there is a shortage of empirical work on the strengths and weaknesses of each modality, which is needed so that more accurate recognizers can be built. In our research, we have investigated the recognition of emotions with respect to six emotional states, namely happiness, sadness, surprise, anger and disgust, as well as the emotionless state, which we refer to as neutral. We have concluded that, in cases where a single modality is deficient in providing evidence for emotion recognition, the recognition process can be supported and complemented by the other modalities.
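The idea that one modality can compensate for another can be illustrated with a simple weighted late-fusion scheme over the six states. This is a hypothetical sketch, not the method used in the paper: the state names follow the abstract, but the weights, the `fuse` helper and the example probability estimates are illustrative assumptions.

```python
# Hypothetical illustration (not the paper's actual method): late fusion of
# per-modality emotion estimates by weighted averaging of their probabilities.
STATES = ["happiness", "sadness", "surprise", "anger", "disgust", "neutral"]

def fuse(modality_probs, weights):
    """Combine per-modality probability dicts using per-modality weights."""
    total = {s: 0.0 for s in STATES}
    wsum = sum(weights)
    for probs, w in zip(modality_probs, weights):
        for s in STATES:
            # Missing states default to probability 0 for that modality.
            total[s] += w * probs.get(s, 0.0)
    # Normalize by the total weight so the result is again a distribution.
    return {s: v / wsum for s, v in total.items()}

# Illustrative per-modality estimates: the facial channel is ambiguous here,
# but the audio and keyboard channels tip the fused decision toward anger.
audio = {"anger": 0.6, "neutral": 0.4}
face = {"anger": 0.3, "surprise": 0.5, "neutral": 0.2}
keyboard = {"neutral": 0.7, "anger": 0.3}

fused = fuse([audio, face, keyboard], weights=[0.4, 0.4, 0.2])
best = max(fused, key=fused.get)
```

In this toy run the visual-facial channel alone would favor surprise, yet the fused distribution selects anger, which is the kind of cross-modality support the studies examine.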