MULTIMEDIA '06 Proceedings of the 14th annual ACM international conference on Multimedia
Audio-visual emotion recognition in adult attachment interview. Proceedings of the 8th international conference on Multimodal interfaces.
A survey of affect recognition methods: audio, visual and spontaneous expressions. Proceedings of the 9th international conference on Multimodal interfaces.
MIST: distributed indexing and querying in sensor networks using statistical models. VLDB '07 Proceedings of the 33rd international conference on Very large data bases.
A robust multimodal approach for emotion recognition. Neurocomputing.
Realistic visual speech synthesis based on hybrid concatenation method. IEEE Transactions on Audio, Speech, and Language Processing - Special issue on multimodal processing in speech-based interactions.
Applying Affect Recognition in Serious Games: The PlayMancer Project. MIG '09 Proceedings of the 2nd International Workshop on Motion in Games.
Audio-visual spontaneous emotion recognition. ICMI'06/IJCAI'07 Proceedings of the ICMI 2006 and IJCAI 2007 international conference on Artificial intelligence for human computing.
Multi-stream confidence analysis for audio-visual affect recognition. ACII'05 Proceedings of the First international conference on Affective Computing and Intelligent Interaction.
Detecting DDoS attacks based on multi-stream fused HMM in source-end network. CANS'06 Proceedings of the 5th international conference on Cryptology and Network Security.
Proceedings of the 3rd ACM international workshop on Audio/visual emotion challenge.
Advances in computer processing power and emerging algorithms are enabling new approaches to Human-Computer Interaction. This paper presents a computing algorithm that uses audio and visual sensors to detect and track a user's affective state in order to aid computer decision making. Using our Multi-stream Fused Hidden Markov Model (MFHMM), we analyzed coupled audio and visual streams to detect 11 cognitive/emotive states. The MFHMM builds an optimal connection among multiple streams according to the maximum entropy principle and the maximum mutual information criterion. Person-independent experiments on 660 sequences from 20 subjects show that the MFHMM achieves an accuracy of 80.61%, outperforming face-only HMM, pitch-only HMM, energy-only HMM, and independent HMM fusion.
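The independent-HMM-fusion baseline that the MFHMM is compared against can be sketched as follows: train one HMM per stream per class, then combine the per-stream log-likelihoods with a fixed weight and pick the highest-scoring class. This is a minimal illustrative sketch, not the paper's implementation; the model parameters, the `fuse_streams` helper, and the fixed fusion weight are all assumptions (the MFHMM, by contrast, learns the inter-stream coupling rather than assuming independence).

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi: initial state probs (n,); A: transitions (n, n); B: emissions (n, m)."""
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha /= scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha /= scale
    return loglik

def fuse_streams(audio_obs, visual_obs, models, w_audio=0.5):
    """Independent-fusion baseline (hypothetical helper): per-class,
    per-stream HMM log-likelihoods combined with a fixed weight."""
    scores = {}
    for label, (audio_hmm, visual_hmm) in models.items():
        la = forward_loglik(audio_obs, *audio_hmm)
        lv = forward_loglik(visual_obs, *visual_hmm)
        scores[label] = w_audio * la + (1.0 - w_audio) * lv
    return max(scores, key=scores.get)
```

Because the fusion weight is fixed and the streams are scored independently, this baseline cannot exploit audio-visual correlations, which is the limitation the multi-stream fused HMM addresses.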