Multiple classifier systems for the classification of audio-visual emotional states
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part II
Research in human-computer interaction (HCI) has increasingly addressed the integration of emotional intelligence into interactive systems. Such systems must be able to recognize, interpret, and generate emotions. Although human emotions are expressed through multiple modalities, such as speech, facial expressions, and hand or body gestures, most research in affective computing has focused on unimodal emotion recognition. In principle, a multimodal approach to emotion recognition should be more accurate and more robust against missing or noisy data. In this study we consider multiple classifier systems for the classification of facial expressions, and additionally present a prototype of an audio-visual laughter detection system. Finally, a novel implementation of a Java process engine for pattern recognition and information fusion is described.
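To illustrate the kind of combination step a multiple classifier system performs, the sketch below shows one common and simple fusion rule: a weighted average of class posteriors produced by separate audio and video classifiers, followed by an arg-max decision. This is a minimal, hypothetical example in Java (the language of the process engine mentioned above); the class names, weights, and three-class posterior vectors are illustrative assumptions, not taken from the paper.

```java
// Hypothetical sketch of decision-level fusion in a multiple classifier
// system: per-modality class posteriors are combined by a weighted average
// and the fused decision is the class with the highest combined score.
public class DecisionFusion {

    // Weighted average of two posterior vectors; wAudio in [0, 1] is the
    // weight given to the audio classifier, (1 - wAudio) to the video one.
    static double[] fuse(double[] audio, double[] video, double wAudio) {
        double[] out = new double[audio.length];
        for (int i = 0; i < audio.length; i++) {
            out[i] = wAudio * audio[i] + (1.0 - wAudio) * video[i];
        }
        return out;
    }

    // Index of the largest entry, i.e. the fused class decision.
    static int argMax(double[] p) {
        int best = 0;
        for (int i = 1; i < p.length; i++) {
            if (p[i] > p[best]) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        // Illustrative posteriors over three emotion classes.
        double[] audio = {0.2, 0.5, 0.3};
        double[] video = {0.6, 0.3, 0.1};
        double[] fused = fuse(audio, video, 0.4);
        System.out.println(argMax(fused)); // prints 0: the modalities disagree,
        // but the fused score for class 0 (0.44) wins over class 1 (0.38)
    }
}
```

One design point this illustrates: if a modality's input is missing or noisy, its weight can be lowered (or set to zero) without retraining the individual classifiers, which is one reason decision-level fusion is attractive for robustness.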