Psychologists have long explored the mechanisms by which humans recognize other humans' affective states from modalities such as voice and face display. This exploration has led to the identification of the main mechanisms, including the important role played in the recognition process by the modalities' dynamics. Constrained by human physiology, the temporal evolution of a modality appears to be well approximated by a sequence of temporal segments called onset, apex, and offset. Stemming from these findings, computer scientists have, over the past 15 years, proposed various methodologies to automate the recognition process. We note, however, two main limitations to date. The first is that much of the past research has focused on affect recognition from single modalities. The second is that even the few multimodal systems have not paid sufficient attention to the modalities' dynamics: the automatic determination of their temporal segments, their synchronization for the purpose of modality fusion, and their role in affect recognition have yet to be adequately explored. To address these issues, this paper focuses on affective face and body display, proposes a method to automatically detect their temporal segments or phases, explores whether the detection of the temporal phases can effectively support recognition of affective states, and recognizes affective states based on phase synchronization/alignment. The experimental results show the following: 1) affective face and body displays are simultaneous but not strictly synchronous; 2) explicit detection of the temporal phases can improve the accuracy of affect recognition; 3) recognition from fused face and body modalities performs better than that from the face or the body modality alone; and 4) synchronized feature-level fusion achieves better performance than decision-level fusion.
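The abstract contrasts feature-level fusion of synchronized face and body features with decision-level fusion of per-modality classifier outputs. The sketch below only illustrates that distinction under assumed inputs (pre-extracted, frame-aligned face and body feature matrices) and an assumed classifier (a probabilistic SVM); it is not the paper's implementation, and the temporal phase detection and alignment steps are omitted.

```python
# Illustrative sketch (assumed setup, not the paper's implementation) of
# feature-level vs. decision-level fusion of face and body features.
import numpy as np
from sklearn.svm import SVC

def feature_level_fusion(face_feats, body_feats, labels):
    """Feature-level fusion: concatenate temporally aligned face and body
    feature vectors and train a single classifier on the joint vector."""
    fused = np.hstack([face_feats, body_feats])  # assumes rows are frame/phase-aligned
    return SVC(probability=True).fit(fused, labels)

def decision_level_fusion(face_feats, body_feats, labels):
    """Decision-level fusion: train one classifier per modality and combine
    their posterior probabilities (here by simple averaging)."""
    face_clf = SVC(probability=True).fit(face_feats, labels)
    body_clf = SVC(probability=True).fit(body_feats, labels)

    def predict(face_x, body_x):
        # Average the two modalities' class posteriors, then pick the best class.
        p = (face_clf.predict_proba(face_x) + body_clf.predict_proba(body_x)) / 2.0
        return face_clf.classes_[p.argmax(axis=1)]

    return predict
```

Per finding 4 in the abstract, it is the synchronized feature-level variant that performed better in the authors' experiments.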