Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. Instead of recognizing a few prototypic expressions, the AFA system classifies fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS). Multi-state face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as inputs, a group of action units (neutral expression, 6 upper face AUs, and 10 lower face AUs) is recognized, whether the AUs occur alone or in combination. The system has achieved average recognition rates of 96.4% (95.4% if neutral expressions are excluded) for upper face AUs and 96.7% (95.6% with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams.
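The pipeline described above extracts parametric descriptions of facial features and maps them to AU labels. A minimal sketch of that final mapping step is shown below; it is not the authors' recognizer (which is learned from tracked features), and the feature names, baselines, and thresholds are hypothetical, chosen only to illustrate how parametric measurements could be turned into AU decisions that may occur alone or in combination.

```python
def recognize_aus(features):
    """Map hypothetical parametric feature measurements to FACS action units.

    `features` maps feature names to values normalized against a
    neutral-expression baseline, so 0.0 means "no change from neutral".
    All names and thresholds are illustrative assumptions, not the
    parameters used by the AFA system.
    """
    aus = set()
    # Upper face examples: AU 1 (inner brow raiser), AU 4 (brow lowerer).
    if features.get("inner_brow_height", 0.0) > 0.2:
        aus.add("AU1")
    if features.get("inner_brow_height", 0.0) < -0.2:
        aus.add("AU4")
    # Lower face examples: AU 12 (lip corner puller), AU 27 (mouth stretch).
    if features.get("lip_corner_angle", 0.0) > 0.15:
        aus.add("AU12")
    if features.get("mouth_opening", 0.0) > 0.5:
        aus.add("AU27")
    # With no feature change past a threshold, report neutral expression.
    return aus if aus else {"neutral"}


# Example: a smile with an open mouth triggers two lower face AUs at once.
print(recognize_aus({"lip_corner_angle": 0.3, "mouth_opening": 0.6}))
```

Note that the output is a set rather than a single label: as in the paper, AUs are recognized individually, so combinations fall out naturally instead of requiring a separate class per combination.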