This paper provides a new fully automatic framework to analyze facial action units, the fundamental building blocks of facial expression enumerated in Paul Ekman's Facial Action Coding System (FACS). The action units examined in this paper include upper facial muscle movements such as inner eyebrow raise, eye widening, and so forth, which combine to form facial expressions. Although prior methods have obtained high recognition rates for recognizing facial action units, these methods either use manually pre-processed image sequences or require human specification of facial features; thus, they have relied on substantial human intervention. This paper presents a fully automatic method, requiring no such human specification. The system first robustly detects the pupils using an infrared-sensitive camera equipped with infrared LEDs. For each frame, the pupil positions are used to localize and normalize the eye and eyebrow regions, which are analyzed using principal component analysis (PCA) to recover parameters that relate to the shape of the facial features. These parameters are used as input to classifiers based on Support Vector Machines (SVMs) to recognize upper facial action units and all their possible combinations. On a completely natural dataset with substantial head movement, pose changes, and occlusions, the new framework achieved a recognition accuracy of 69.3% for each individual AU and an accuracy of 62.5% for all possible AU combinations. The framework achieves a higher recognition accuracy on the Cohn-Kanade AU-coded facial expression database, which has previously been used to evaluate other facial action recognition systems.
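The PCA-to-SVM stage of the pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data is synthetic (random arrays standing in for normalized eye-region crops and AU labels), and the dimensions, component count, and kernel choice are assumptions for the sake of the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Synthetic stand-ins: 200 normalized eye/eyebrow-region crops (32x64 px,
# flattened) with toy binary AU labels. Real input would come from the
# pupil-localized, normalized regions described in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32 * 64))
y = rng.integers(0, 2, size=200)

# Recover low-dimensional parameters relating to facial-feature shape.
# The number of components (10) is an illustrative assumption.
pca = PCA(n_components=10)
shape_params = pca.fit_transform(X)

# Classify AU presence from the PCA parameters with an SVM.
clf = SVC(kernel="rbf")
clf.fit(shape_params, y)
pred = clf.predict(shape_params[:5])
print(pred.shape)  # one AU prediction per frame
```

In practice one SVM would be trained per action unit (or per AU combination), with the PCA basis fit on training frames only and reused at test time via `pca.transform`.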