Psychological research suggests that humans rely on the combined visual channels of face and body more than on any other channel when making judgments about human communicative behavior. However, most existing systems for analyzing human nonverbal behavior are mono-modal and focus on the face alone; research aiming to integrate gestures as a means of expression has emerged only recently. Accordingly, this paper presents an approach to the automatic visual recognition of expressive face and upper-body gestures from video sequences, suitable for use in a vision-based affective multimodal framework. Face and body movements are captured simultaneously with two separate cameras. For each video sequence, single expressive frames from both the face and the body are selected manually for emotion analysis and recognition. First, individual classifiers are trained on each modality separately. Second, facial expression and affective body-gesture information are fused at both the feature level and the decision level. In the experiments performed, emotion classification using the two modalities achieved higher recognition accuracy than classification using the facial or bodily modality alone.
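The two fusion schemes the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual pipeline: the toy data, dimensions, and the nearest-centroid classifier standing in for the per-modality classifiers are all assumptions for illustration. Feature-level fusion concatenates face and body feature vectors before training a single classifier; decision-level fusion trains one classifier per modality and combines their posterior estimates (here with a simple sum rule).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: per-frame feature vectors for two modalities
# (face, body) and three emotion classes. Dimensions are illustrative.
n_per_class, n_classes = 20, 3
face_dim, body_dim = 10, 6

def make_modality(dim):
    # Samples for each class scattered around a distinct random centroid.
    centroids = rng.normal(size=(n_classes, dim)) * 3
    return np.vstack([centroids[c] + rng.normal(size=(n_per_class, dim))
                      for c in range(n_classes)])

X_face = make_modality(face_dim)
X_body = make_modality(body_dim)
y = np.repeat(np.arange(n_classes), n_per_class)

class NearestCentroid:
    # Minimal stand-in for the per-modality classifiers in the paper.
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict_proba(self, X):
        # Turn distances to class centroids into soft posterior estimates.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        p = np.exp(-d)
        return p / p.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.classes_[np.argmax(self.predict_proba(X), axis=1)]

# Feature-level fusion: concatenate modality features, train one classifier.
X_fused = np.hstack([X_face, X_body])
pred_feat = NearestCentroid().fit(X_fused, y).predict(X_fused)

# Decision-level fusion: one classifier per modality, posteriors averaged.
face_clf = NearestCentroid().fit(X_face, y)
body_clf = NearestCentroid().fit(X_body, y)
proba = (face_clf.predict_proba(X_face) + body_clf.predict_proba(X_body)) / 2
pred_dec = np.argmax(proba, axis=1)

print("feature-level accuracy:", (pred_feat == y).mean())
print("decision-level accuracy:", (pred_dec == y).mean())
```

Averaging posteriors is only one of several decision-level combination rules (product, max, or weighted voting are common alternatives); which performs best depends on how correlated the modalities' errors are.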