The nature of statistical learning theory
A decision-theoretic generalization of on-line learning and an application to boosting
Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing & STOC '94, May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT '95), March 13–15, 1995
A nonparametric measure of the overlapping coefficient
Computational Statistics & Data Analysis
Comprehensive Database for Facial Expression Analysis
FG '00 Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition 2000
Active and Dynamic Information Fusion for Facial Expression Understanding from Image Sequences
IEEE Transactions on Pattern Analysis and Machine Intelligence
Robust shape-based head tracking
ACIVS'07 Proceedings of the 9th international conference on Advanced concepts for intelligent vision systems
Multimodal coordination of facial action, head rotation, and eye motion during spontaneous smiles
FGR '04 Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition
LIBSVM: A library for support vector machines
ACM Transactions on Intelligent Systems and Technology (TIST)
Context-independent facial action unit recognition using shape and Gabor phase information
ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction - Volume Part I
GPSO versus GA in facial emotion detection
International Journal of Artificial Intelligence and Soft Computing
The face is an important source of information in multimodal communication. Facial expressions are generated by contractions of facial muscles, which lead to subtle changes in the area of the eyelids, eyebrows, nose, lips, and skin texture, often revealed by wrinkles and bulges. To measure these subtle changes, Ekman et al. [5] developed the Facial Action Coding System (FACS), a human-observer-based system designed to detect subtle changes in facial features, which describes facial expressions in terms of action units (AUs). We present a technique to automatically recognize lower-face Action Units, each independently of the others. Even though we do not explicitly model AU combinations, which makes the classification task harder, we achieve an average F1 score of 94.83%.
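To make the reported metric concrete, the sketch below shows how a per-AU F1 score and its macro average over independently trained detectors can be computed. This is an illustration only, not the authors' evaluation code; the AU names and the label data are hypothetical.

```python
# Illustrative sketch (not from the paper): binary F1 per AU detector,
# then the macro average across detectors.
def f1_score(y_true, y_pred):
    """Binary F1 from parallel lists of 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-frame ground truth and predictions for three
# lower-face AUs (e.g. AU12 lip-corner puller, AU15 lip-corner
# depressor, AU25 lips part).
results = {
    "AU12": ([1, 1, 0, 1], [1, 1, 0, 0]),
    "AU15": ([0, 1, 1, 0], [0, 1, 1, 0]),
    "AU25": ([1, 0, 1, 1], [1, 0, 0, 1]),
}
average_f1 = sum(f1_score(t, p) for t, p in results.values()) / len(results)
```

Each detector is scored on its own binary labels, matching the paper's setup of recognizing each AU independently; the macro average weights every AU equally regardless of how often it occurs.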