In this paper, we investigate the possibility of improving the accuracy of visual-facial emotion recognition through the use of additional (complementary) information. The investigation is based on three empirical studies that we conducted involving human subjects and human observers, concerned with the recognition of emotions from the visual-facial modality, from audio-lingual information, and from keyboard-stroke information, respectively. The studies were motivated by the relative shortage of previous empirical work on the strengths and weaknesses of each modality, knowledge that is needed to determine the extent to which keyboard-stroke and audio-lingual information complements and improves the emotion-recognition accuracy of the visual-facial modality. Specifically, our research focused on the recognition of six basic emotion states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotionless state, which we refer to as neutral. We have found that the visual-facial modality may allow the recognition of certain states, such as neutral and surprise, with sufficient accuracy. However, its accuracy in recognizing anger and disgust can be improved significantly when assisted by keyboard-stroke information.
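The abstract reports per-modality recognition accuracies measured with human observers and does not prescribe an automated fusion algorithm. Purely as an illustrative sketch of how complementary modality evidence could be combined in an automated recognizer, the Python snippet below applies a simple weighted late-fusion scheme over per-modality confidence scores; the function fuse_scores, the weights, and all numeric values are hypothetical assumptions for illustration, not methods or results from the paper.

```python
# Hypothetical weighted late fusion across three modalities.
# The paper measures per-modality accuracy with human observers; it does
# not specify a fusion algorithm, so this scheme is an assumption.

EMOTIONS = ["happiness", "sadness", "surprise", "anger", "disgust", "neutral"]

def fuse_scores(visual, keyboard, audio, weights=(0.5, 0.3, 0.2)):
    """Combine per-modality confidence scores (dicts keyed by emotion)
    into one normalized distribution via a convex combination."""
    fused = {}
    for emotion in EMOTIONS:
        fused[emotion] = (weights[0] * visual.get(emotion, 0.0)
                          + weights[1] * keyboard.get(emotion, 0.0)
                          + weights[2] * audio.get(emotion, 0.0))
    total = sum(fused.values()) or 1.0  # guard against all-zero input
    return {e: s / total for e, s in fused.items()}

# Example: the visual modality is uncertain between anger and disgust,
# and keyboard-stroke evidence tips the decision, mirroring the finding
# that keyboard-stroke information helps most for these two states.
visual = {"anger": 0.35, "disgust": 0.35, "neutral": 0.30}
keyboard = {"anger": 0.70, "disgust": 0.20, "neutral": 0.10}
audio = {"anger": 0.40, "disgust": 0.30, "neutral": 0.30}

scores = fuse_scores(visual, keyboard, audio)
print(max(scores, key=scores.get))  # -> "anger"
```

A convex combination keeps the fused scores interpretable as a probability-like distribution; in practice the weights would be tuned from per-modality accuracies such as those measured in the three empirical studies.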