The recent influx of multimodal affect classifiers raises the important question of whether these classifiers yield accuracy rates that exceed those of their unimodal counterparts. This question was addressed by performing a meta-analysis on 30 published studies that reported both multimodal and unimodal affect detection accuracies. The results indicated that multimodal accuracies were consistently better than unimodal accuracies, yielding an average 8.12% improvement over the best unimodal classifiers. However, performance improvements were roughly three times lower when classifiers were trained on natural or semi-natural data (4.39% improvement) compared to acted data (12.1% improvement). Importantly, performance of the best unimodal classifier explained an impressive 80.6% (cross-validated) of the variance in multimodal accuracy. The results also indicated that multimodal accuracies were substantially higher than accuracies of the second-best unimodal classifiers (an average improvement of 29.4%) irrespective of the naturalness of the training data. Theoretical and applied implications of the findings are discussed.
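To make the two headline quantities concrete, the sketch below shows how a per-study percent improvement and a variance-explained (R²) figure of this kind might be computed. The study accuracies here are hypothetical placeholders, not the 30 studies analyzed in the paper, and the fit is a simple least-squares regression rather than the paper's cross-validated procedure.

```python
# Hedged sketch of the meta-analytic quantities described in the abstract.
# The (best-unimodal, multimodal) accuracy pairs below are HYPOTHETICAL,
# not data from the 30 published studies.
studies = [
    (0.70, 0.75),
    (0.62, 0.68),
    (0.80, 0.83),
]

# Percent improvement of the multimodal classifier over the best unimodal one.
improvements = [(multi - uni) / uni * 100.0 for uni, multi in studies]
mean_improvement = sum(improvements) / len(improvements)

# Variance in multimodal accuracy explained by best-unimodal accuracy:
# R^2 from an ordinary least-squares line (the paper cross-validated this step).
n = len(studies)
xs = [uni for uni, _ in studies]
ys = [multi for _, multi in studies]
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
slope = sxy / sxx
intercept = my - slope * mx
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r_squared = 1.0 - ss_res / ss_tot
```

With these toy numbers the mean improvement comes out near 6.9% and R² near 1.0; the point is only the shape of the computation, not the values.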