Real human-computer interaction systems that rely on multiple modalities face the problem that not all information channels are available at every time step. Nevertheless, an estimate of the current user state is required at any time so that the system can react instantaneously on the basis of whichever modalities are available. A novel approach to the decision fusion of such fragmentary classifications is therefore proposed and empirically evaluated on the audio and video signals of a corpus of non-acted user behavior. The results show that the visual and prosodic analyses complement each other successfully, leading to an outstanding performance of the fusion architecture.
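The core idea of fusing fragmentary classifications can be illustrated with a minimal sketch. This is not the authors' method, but a generic decision-level fusion scheme under the assumption that each available modality contributes a class-posterior estimate and that missing channels are simply excluded from a reliability-weighted average; the modality names, weights, and class labels are illustrative.

```python
def fuse_decisions(posteriors, weights):
    """Fuse class posteriors from whichever modalities are available.

    posteriors: dict mapping modality name -> list of class probabilities,
                or None if that channel delivered no data at this time step
    weights:    dict mapping modality name -> reliability weight
    Returns the fused probability distribution over classes.
    """
    # Keep only the channels that actually produced an estimate.
    available = {m: p for m, p in posteriors.items() if p is not None}
    if not available:
        raise ValueError("no modality available at this time step")
    n_classes = len(next(iter(available.values())))
    total_w = sum(weights[m] for m in available)
    # Renormalize the weights over the available modalities only,
    # so the fused result is still a valid probability distribution.
    return [
        sum(weights[m] * available[m][c] for m in available) / total_w
        for c in range(n_classes)
    ]

# Example: the video channel is missing at this time step, so fusion
# falls back to the audio estimate alone (illustrative class labels).
audio = [0.7, 0.2, 0.1]  # e.g. P(neutral), P(positive), P(negative)
fused = fuse_decisions({"audio": audio, "video": None},
                       {"audio": 0.6, "video": 0.4})
```

Because the weights are renormalized over the available channels, the fused estimate degrades gracefully: with one channel missing it reduces to the remaining channel's posterior, and with all channels present it is an ordinary weighted sum of the per-modality decisions.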