Affective and human-centered computing are two areas related to HCI that have attracted considerable attention in recent years. One reason is the plethora of devices able to record and process multimodal input from users and to adapt their functionality to user preferences or individual habits, thus enhancing usability and appealing to users less accustomed to conventional interfaces. In the quest to obtain user feedback unobtrusively, the visual and auditory modalities allow us to infer the user's emotional state by combining information from facial expression recognition with speech prosody features. In this paper, we describe a multi-cue, dynamic approach to emotion recognition in naturalistic video sequences. In contrast to the strictly controlled recording conditions of most audiovisual material, the current research focuses on sequences taken from near-real-world situations. Recognition is performed via a 'Simple Recurrent Network', which lends itself well to modeling dynamic events in both the user's facial expressions and speech. Moreover, this approach differs from existing work in that it models user expressivity using a dimensional representation of activation and valence, instead of detecting the usual 'universal emotions', which are scarce in everyday human-machine interaction. The algorithm is deployed on an audiovisual database recorded by simulating human-human discourse, which therefore contains less extreme expressivity and subtle variations across a number of emotion labels.
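To make the architecture concrete, the following is a minimal sketch of an Elman-style Simple Recurrent Network applied per frame to a sequence of fused audio-visual features, producing continuous valence and activation estimates. All dimensions, weights, and the feature fusion are illustrative assumptions for this sketch, not the configuration used in the paper; the point is only how the recurrent context layer lets each prediction depend on the history of the sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 16    # fused facial-expression + prosody features per frame (assumed)
HIDDEN_DIM = 8   # recurrent (context) units (assumed)
OUT_DIM = 2      # one output each for valence and activation

# Randomly initialized weights stand in for a trained model.
W_in = rng.normal(scale=0.1, size=(HIDDEN_DIM, FEAT_DIM))
W_rec = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))
W_out = rng.normal(scale=0.1, size=(OUT_DIM, HIDDEN_DIM))
b_h = np.zeros(HIDDEN_DIM)
b_o = np.zeros(OUT_DIM)

def srn_forward(sequence):
    """Run the SRN over a (T, FEAT_DIM) sequence; return (T, 2) predictions."""
    h = np.zeros(HIDDEN_DIM)  # context layer: carries the sequence history
    outputs = []
    for x in sequence:
        # Hidden state mixes the current frame with the previous context.
        h = np.tanh(W_in @ x + W_rec @ h + b_h)
        # tanh keeps valence/activation bounded in [-1, 1].
        outputs.append(np.tanh(W_out @ h + b_o))
    return np.array(outputs)

frames = rng.normal(size=(10, FEAT_DIM))  # 10 frames of synthetic features
preds = srn_forward(frames)
print(preds.shape)  # one (valence, activation) pair per frame
```

Because the context layer is updated at every frame, identical feature vectors occurring at different points in a sequence can yield different predictions, which is what makes the network suitable for the dynamic, evolving expressions described above.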