Affective computing
Statistical Pattern Recognition: A Review. IEEE Transactions on Pattern Analysis and Machine Intelligence
Coupled hidden Markov models for complex action recognition. CVPR '97: Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition
Joint processing of audio-visual information for the recognition of emotional expressions in human-computer interaction
Bimodal HCI-related affect recognition. Proceedings of the 6th International Conference on Multimodal Interfaces
Audio-Visual Affect Recognition through Multi-Stream Fused HMM for HCI. CVPR '05: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Volume 2
Audio-visual emotion recognition in adult attachment interview. Proceedings of the 8th International Conference on Multimodal Interfaces
A survey of affect recognition methods: audio, visual and spontaneous expressions. Proceedings of the 9th International Conference on Multimodal Interfaces
Human-Centred Intelligent Human Computer Interaction (HCI²): how far are we from attaining it? International Journal of Autonomous and Adaptive Communications Systems
Evidence Theory-Based Multimodal Emotion Recognition. MMM '09: Proceedings of the 15th International Multimedia Modeling Conference on Advances in Multimedia Modeling
Audio-visual spontaneous emotion recognition. ICMI'06/IJCAI'07: Proceedings of the ICMI 2006 and IJCAI 2007 International Conference on Artificial Intelligence for Human Computing
Multimodal retrieval with relevance feedback based on genetic programming. Multimedia Tools and Applications
To approach the human ability to assess affect, an automatic affect recognition system should exploit multi-sensor information. Within the framework of the multi-stream fused hidden Markov model (MFHMM), we present a training combination strategy for audio-visual affect recognition. Unlike the weighting combination scheme, our approach can employ a variety of learning methods to obtain a robust multi-stream fusion result. We evaluate our approach on person-independent recognition of 11 affective states from 20 subjects. The experimental results suggest that MFHMM outperforms the independent HMM (IHMM), which assumes independence among streams, and that the training combination strategy outperforms the weighting combination under both clean and varying audio-channel noise conditions.
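To make the contrast concrete, the "weighting combination" baseline mentioned above fuses the per-stream HMM scores with fixed stream weights at decision time. The following is a minimal illustrative sketch of that baseline (not the authors' code): the stream names, weights, and log-likelihood values are hypothetical, standing in for the class-conditional scores that audio and visual HMMs would produce.

```python
def weighted_fusion(stream_loglik, weights):
    """Weighting-combination baseline: fuse per-stream log-likelihoods.

    stream_loglik: dict mapping stream name -> {affective state: log-likelihood}
    weights:       dict mapping stream name -> fixed fusion weight
    Returns the affective state with the highest weighted score.
    """
    # All streams are assumed to score the same set of states.
    states = next(iter(stream_loglik.values())).keys()
    scores = {
        s: sum(weights[m] * stream_loglik[m][s] for m in stream_loglik)
        for s in states
    }
    return max(scores, key=scores.get)


# Hypothetical log-likelihoods for three affective states from two streams.
loglik = {
    "audio": {"joy": -10.0, "anger": -12.0, "sadness": -15.0},
    "video": {"joy": -11.0, "anger": -8.0, "sadness": -14.0},
}
print(weighted_fusion(loglik, {"audio": 0.5, "video": 0.5}))  # prints "anger"
```

The point of the training combination strategy in the abstract is precisely that the fusion is learned during training rather than fixed by such hand-set weights, which is why it can remain robust when the audio channel is corrupted by noise.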