Multi-score learning for affect recognition: the case of body postures
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part I
Fusion of fragmentary classifier decisions for affective state recognition
MPRSS'12 Proceedings of the First international conference on Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction
This study develops a multimodal, ensemble-based system for emotion recognition. Special attention is given to a problem that is often neglected: missing data in one or more modalities. In offline evaluation the issue is easily solved by excluding those parts of the corpus where one or more channels are corrupted or unsuitable for evaluation. In real applications, however, the challenge of missing data cannot be ignored and must be handled adequately. We therefore do not assume in our experiments that the examined data are completely available at all times. The presented system addresses the problem at the multimodal fusion stage: various ensemble techniques, covering established methods as well as rather novel emotion-specific approaches, are explained and enriched with strategies for compensating for temporarily unavailable modalities. We compare and discuss the advantages and drawbacks of the fusion categories, and an extensive evaluation of the techniques is carried out on the CALLAS Expressivity Corpus, which features facial, vocal, and gestural modalities.
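The core idea of compensating for temporarily unavailable modalities at the fusion stage can be illustrated with a minimal sketch. This is not the paper's actual method; it shows one common strategy under assumed inputs: a weighted average of per-modality class scores in which missing channels (here marked as `None`) are simply excluded and the remaining weights are renormalized. All names (`fuse_decisions`, the modality labels) are hypothetical.

```python
import numpy as np

def fuse_decisions(scores, weights=None):
    """Decision-level fusion tolerant to missing modalities.

    scores  -- dict mapping modality name to a class-probability
               vector, or None when that channel is unavailable
    weights -- optional dict of per-modality weights (default: 1.0)
    Returns the fused, renormalized class-probability vector.
    """
    # Keep only the modalities that actually delivered a decision.
    available = {m: np.asarray(s, dtype=float)
                 for m, s in scores.items() if s is not None}
    if not available:
        raise ValueError("no modality delivered a decision")
    if weights is None:
        weights = {m: 1.0 for m in available}
    # Renormalize weights over the available channels only.
    total = sum(weights[m] for m in available)
    fused = sum(weights[m] * s for m, s in available.items()) / total
    return fused / fused.sum()

# Example: the vocal channel is temporarily unavailable.
fused = fuse_decisions({"face": [0.7, 0.3],
                        "voice": None,
                        "gesture": [0.5, 0.5]})
# fused is the average of the two available channels: [0.6, 0.4]
```

The same interface extends naturally to other combination rules (e.g. product or maximum of scores); only the reduction over `available` changes.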