Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions such as happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing the facial muscle actions (action units, AUs) that compound expressions. AUs are agnostic about meaning, leaving inference about conveyed intent to higher-order decision making (e.g., emotion recognition). The proposed fully automatic method not only recognizes 22 AUs but also explicitly models their temporal characteristics (i.e., sequences of the temporal segments neutral, onset, apex, and offset). To do so, it first localizes 20 facial fiducial points with a detector built on Gabor-feature-based boosted classifiers. These points are then tracked through the image sequence using particle filtering with factorized likelihoods. From the tracking data, a combination of GentleBoost, support vector machines, and hidden Markov models recognizes the AUs and their temporal activation patterns. The method attains an average AU recognition rate of 95.3% on a benchmark set of deliberately displayed facial expressions and 72% on spontaneous expressions.
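The abstract does not give implementation details, but the final stage it describes (per-frame AU activation scores smoothed into the temporal segments neutral, onset, apex, and offset by a hidden Markov model) can be sketched as follows. This is a minimal illustration in Python, assuming per-frame state probabilities as stand-ins for the SVM outputs and an illustrative left-to-right transition structure; the state transition values and scores are assumptions, not the paper's actual parameters.

# Hypothetical sketch of HMM-based temporal segmentation of an AU:
# per-frame state probabilities (stand-ins for SVM outputs) are decoded
# into the segment sequence neutral -> onset -> apex -> offset with
# Viterbi. All numeric values below are illustrative assumptions.
import numpy as np

STATES = ["neutral", "onset", "apex", "offset"]

# Transition matrix: mostly self-transitions, with the cyclic ordering
# neutral -> onset -> apex -> offset -> neutral (values are made up).
A = np.array([
    [0.90, 0.10, 0.00, 0.00],  # neutral
    [0.00, 0.80, 0.20, 0.00],  # onset
    [0.00, 0.00, 0.85, 0.15],  # apex
    [0.10, 0.00, 0.00, 0.90],  # offset
])
pi = np.array([1.0, 0.0, 0.0, 0.0])  # assume sequences start in neutral

def viterbi(emission_probs):
    """Most likely state sequence given per-frame P(state | frame)."""
    T, S = emission_probs.shape
    log_A = np.log(A + 1e-12)
    log_delta = np.log(pi + 1e-12) + np.log(emission_probs[0] + 1e-12)
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        # scores[i, j]: best log-probability of being in state i at t-1
        # and moving to state j at t
        scores = log_delta[:, None] + log_A
        backptr[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(emission_probs[t] + 1e-12)
    # Backtrack from the best final state
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return [STATES[s] for s in reversed(path)]

# Toy per-frame scores for a short sequence of 8 frames.
emissions = np.array([
    [0.80, 0.10, 0.05, 0.05],
    [0.40, 0.50, 0.05, 0.05],
    [0.10, 0.60, 0.25, 0.05],
    [0.05, 0.20, 0.70, 0.05],
    [0.05, 0.10, 0.80, 0.05],
    [0.05, 0.05, 0.30, 0.60],
    [0.30, 0.05, 0.05, 0.60],
    [0.70, 0.10, 0.10, 0.10],
])
print(viterbi(emissions))
# Expected output: a smoothed segment sequence such as
# ['neutral', 'onset', 'onset', 'apex', 'apex', 'offset', 'offset', 'neutral']

The design choice illustrated here is the one named in the abstract: noisy per-frame classifier decisions are not taken at face value; the HMM's transition structure enforces the plausible ordering of temporal segments across the sequence.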