We present automated, real-time models, built with machine learning algorithms, that use videotapes of subjects' faces in conjunction with physiological measurements to predict rated emotion (trained coders' second-by-second assessments of sadness or amusement). Input consisted of videotapes of 41 subjects watching emotionally evocative films, along with measures of their cardiovascular activity, somatic activity, and electrodermal responses. We built algorithms based on points extracted from the subjects' faces as well as on their physiological responses. Strengths of the current approach are that (1) we assess the real behavior of subjects watching emotional videos rather than actors making facial poses, (2) the training data allow us to predict both emotion type (amusement versus sadness) and the intensity level of each emotion, and (3) we provide a direct comparison among person-specific, gender-specific, and general models. Results demonstrated good fits for the models overall, with better performance for emotion categories than for emotion intensity, for amusement ratings than for sadness ratings, for a full model using both physiological measures and facial tracking than for either cue alone, and for person-specific models than for gender-specific or general models.
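The pipeline described above (facial tracking plus physiological measures feeding a learned classifier) can be sketched minimally as follows. Everything here is an illustrative assumption, not the authors' actual method: the feature layout (one facial-displacement value plus two physiological channels), the toy value ranges, and the choice of a nearest-centroid classifier are all stand-ins for the real extracted facial points, cardiovascular/somatic/electrodermal measures, and trained models.

```python
# Minimal sketch (hypothetical features and values, not the paper's code):
# predict an emotion category from concatenated facial-tracking and
# physiological features using a nearest-centroid classifier.
import numpy as np

def fit_centroids(X, y):
    """Return {label: mean feature vector} computed from training data."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign x to the label whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Toy feature vectors: [facial-point displacement, heart rate, skin conductance]
rng = np.random.default_rng(0)
amusement = rng.normal([1.0, 75.0, 6.0], 0.2, size=(20, 3))
sadness   = rng.normal([0.2, 65.0, 3.0], 0.2, size=(20, 3))
X = np.vstack([amusement, sadness])
y = np.array(["amusement"] * 20 + ["sadness"] * 20)

model = fit_centroids(X, y)
print(predict(model, np.array([0.9, 74.0, 5.5])))  # → amusement
```

A person-specific model in this sketch would simply fit `fit_centroids` on one subject's trials only, while the general model pools all subjects, mirroring the comparison reported in the abstract.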