Emotions play a major role in human-to-human communication, enabling people to express themselves beyond the verbal domain. In recent years, important advances have been made in unimodal speech and video emotion analysis, where facial expression information and prosodic audio features are treated independently. However, in a naturalistic context, where adaptation to specific human characteristics and expressivity is required and single modalities alone cannot provide satisfactory evidence, the two modalities clearly need to be combined. This paper proposes appropriate neural network classifiers for multimodal emotion analysis within an adaptive framework that activates retraining of each modality whenever deterioration of the respective performance is detected. Results are presented on the IST HUMAINE NoE naturalistic database: facial expression information and prosodic audio features are extracted from the same data, and feature-based emotion analysis is performed through the proposed adaptive neural network methodology.
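To make the retraining mechanism concrete, the following is a minimal Python sketch of one way such an adaptive trigger could work; it is an illustration under stated assumptions, not the authors' implementation. The class AdaptiveModality, the constants RETRAIN_THRESHOLD and WINDOW, and the fuse helper are all hypothetical names: each modality's classifier is monitored over a sliding window of recent predictions, and retraining on buffered samples is activated when accuracy deteriorates below a threshold.

    # Hypothetical sketch of performance-triggered per-modality retraining.
    # All identifiers below are illustrative assumptions, not the paper's API.
    from collections import deque

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    RETRAIN_THRESHOLD = 0.6   # assumed accuracy floor that triggers retraining
    WINDOW = 50               # assumed size of the sliding evaluation window


    class AdaptiveModality:
        """One modality, e.g. facial features or prosodic audio features."""

        def __init__(self, name):
            self.name = name
            self.clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
            self.recent = deque(maxlen=WINDOW)   # 1 = correct, 0 = wrong
            self.buffer_X, self.buffer_y = [], []

        def fit(self, X, y):
            # Initial training on annotated data for this modality.
            self.clf.fit(X, y)

        def observe(self, x, y_true):
            """Classify one sample, track correctness, retrain on deterioration."""
            y_pred = self.clf.predict(x.reshape(1, -1))[0]
            self.recent.append(int(y_pred == y_true))
            self.buffer_X.append(x)
            self.buffer_y.append(y_true)
            if len(self.recent) == WINDOW and np.mean(self.recent) < RETRAIN_THRESHOLD:
                # Performance has dropped: retrain this modality alone on the
                # samples buffered since deployment, then reset the window.
                self.clf.fit(np.array(self.buffer_X), np.array(self.buffer_y))
                self.recent.clear()
            return y_pred


    def fuse(facial_probs, audio_probs):
        # Naive decision-level fusion (averaged class posteriors) as a
        # stand-in; the paper performs feature-based multimodal analysis.
        return np.argmax((facial_probs + audio_probs) / 2.0)

In this sketch the retraining decision is purely local to a modality, which matches the abstract's statement that retraining is activated per modality whenever its own performance deteriorates; how the deterioration signal is actually estimated without ground-truth labels is not specified here and would depend on the framework's adaptation criterion.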