To feel or not to feel: the role of affect in human-computer interaction
International Journal of Human-Computer Studies - Application of affective computing in human-computer interaction
Affective multimodal human-computer interaction
Proceedings of the 13th annual ACM international conference on Multimedia
Multimodal affect recognition in learning environments
Proceedings of the 13th annual ACM international conference on Multimedia
Fully Automatic Facial Action Recognition in Spontaneous Behavior
FGR '06 Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition
Fully Automatic Facial Action Unit Detection and Temporal Analysis
CVPRW '06 Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop
Spontaneous vs. posed facial behavior: automatic analysis of brow actions
Proceedings of the 8th international conference on Multimodal interfaces
Particle filtering with factorized likelihoods for tracking facial features
FGR' 04 Proceedings of the Sixth IEEE international conference on Automatic face and gesture recognition
Multimodal coordination of facial action, head rotation, and eye motion during spontaneous smiles
FGR' 04 Proceedings of the Sixth IEEE international conference on Automatic face and gesture recognition
Towards unsupervised detection of affective body posture nuances
ACII'05 Proceedings of the First international conference on Affective Computing and Intelligent Interaction
Fusion of audio and visual cues for laughter detection
CIVR '08 Proceedings of the 2008 international conference on Content-based image and video retrieval
Human-Centred Intelligent Human Computer Interaction (HCI²): how far are we from attaining it?
International Journal of Autonomous and Adaptive Communications Systems
Audiovisual laughter detection based on temporal features
ICMI '08 Proceedings of the 10th international conference on Multimodal interfaces
Social signal processing: state-of-the-art and future perspectives of an emerging domain
MM '08 Proceedings of the 16th ACM international conference on Multimedia
Social signal processing: Survey of an emerging domain
Image and Vision Computing
Static vs. dynamic modeling of human nonverbal behavior from multiple cues and modalities
Proceedings of the 2009 international conference on Multimodal interfaces
Automatic temporal segment detection and affect recognition from face and body display
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics - Special issue on human computing
Eyes do not lie: spontaneous versus posed smiles
Proceedings of the international conference on Multimedia
Implicit image tagging via facial information
Proceedings of the 2nd international workshop on Social signal processing
Identification of narrative peaks in video clips: text features perform best
CLEF'09 Proceedings of the 10th international conference on Cross-language evaluation forum: multimedia experiments
Sentic avatar: multimodal affective conversational agent with common sense
Proceedings of the Third COST 2102 international training school conference on Toward autonomous, adaptive, and context-aware multimodal interfaces: theoretical and practical issues
Are you friendly or just polite? - analysis of smiles in spontaneous face-to-face interactions
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part I
A multi-layer hybrid framework for dimensional emotion classification
MM '11 Proceedings of the 19th ACM international conference on Multimedia
Spontaneous pain expression recognition in video sequences
VoCS'08 Proceedings of the 2008 international conference on Visions of Computer Science: BCS International Academic Conference
Static and dynamic 3D facial expression recognition: A comprehensive survey
Image and Vision Computing
Recognition of 3D facial expression dynamics
Image and Vision Computing
Facial expression recognition using geometric and appearance features
Proceedings of the 4th International Conference on Internet Multimedia Computing and Service
Towards multimodal deception detection -- step 1: building a collection of deceptive videos
Proceedings of the 14th ACM international conference on Multimodal interaction
Are you really smiling at me? spontaneous versus posed enjoyment smiles
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part III
Multi-view facial expression recognition analysis with generic sparse coding feature
ECCV'12 Proceedings of the 12th European conference on Computer Vision - Volume Part III
Natural interaction expressivity modeling and analysis
Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments
Automatic detection of deceit in verbal communication
Proceedings of the 15th ACM on International conference on multimodal interaction
Facial expression recognition in dynamic sequences: An integrated approach
Pattern Recognition
Automatic distinction between posed and spontaneous expressions is an unsolved problem. Previous studies in the cognitive sciences have indicated that posed expressions can be separated from spontaneous ones automatically using the face modality alone; however, little is known about the information contained in head and shoulder motion. In this work, we propose to (i) distinguish between posed and spontaneous smiles by fusing the head, face, and shoulder modalities, (ii) investigate which modalities carry important information and how the information from the modalities relates to each other, and (iii) determine to what extent the temporal dynamics of these signals contribute to solving the problem. We use a cylindrical head tracker to track head movements and two particle filtering techniques to track facial and shoulder movements. Classification is performed by kernel methods combined with ensemble learning techniques. We investigated two aspects of multimodal fusion: the level of abstraction (i.e., early, mid-level, and late fusion) and the fusion rule used (i.e., sum, product, and weight criteria). Experimental results on 100 videos displaying posed smiles and 102 videos displaying spontaneous smiles are presented. The best results were obtained with late fusion of all modalities, which classified 94.0% of the videos correctly.
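The late-fusion rules named in the abstract (sum, product, and weight criteria) can be sketched as operations on per-modality classifier posteriors. The sketch below is illustrative only, not the authors' implementation: the probability values, modality names, and weights are hypothetical placeholders standing in for the outputs of the paper's kernel-method classifiers.

```python
import numpy as np

# Hypothetical per-modality posteriors P(spontaneous | modality) for one video;
# the three modalities follow the paper: head, face, and shoulders.
probs = {"head": 0.62, "face": 0.81, "shoulders": 0.55}

def late_fuse(p, rule="sum", weights=None):
    """Combine per-modality posteriors with one of three fusion rules.

    'sum'     - mean of the posteriors,
    'product' - product of the posteriors, renormalised against the
                complementary (posed) class,
    'weight'  - weighted sum (weights might, e.g., reflect each
                modality's validation accuracy; values here are made up).
    """
    keys = sorted(p)
    vals = np.array([p[m] for m in keys])
    if rule == "sum":
        return float(vals.mean())
    if rule == "product":
        pos = vals.prod()
        neg = (1.0 - vals).prod()
        return float(pos / (pos + neg))
    if rule == "weight":
        w = np.array([weights[m] for m in keys])
        return float(np.dot(w, vals) / w.sum())
    raise ValueError(f"unknown fusion rule: {rule!r}")

score = late_fuse(probs, rule="sum")
label = "spontaneous" if score >= 0.5 else "posed"
```

Because fusion happens on classifier outputs rather than on raw features, each modality's tracker and classifier can be trained independently, which is one reason late fusion is often the easiest scheme to experiment with.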