This paper proposes a method for facial expression recognition in image sequences. The face is detected in the scene, and the facial features are then located using image normalization and thresholding techniques. An optimization algorithm adapts the Candide wireframe model to the first frame of the face image sequence; in the subsequent frames, the facial features are tracked with an active appearance algorithm. Once the model fits the first frame, its animation parameters are set to zero to obtain the model shape for the neutral expression of the same face. The last frame of the sequence corresponds to the greatest facial expression intensity. The geometric displacement of the Candide wireframe nodes between the neutral-expression frame and the last frame is used as input to a multiclass support vector machine, which classifies the expression into one of seven classes: happiness, surprise, sadness, anger, disgust, fear, or neutral. The method is applicable to frontal faces as well as faces tilted by $$\pm 30\,^{\circ }, \pm 45\,^{\circ }, \pm 60\,^{\circ }$$ about the y axis.
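The classification stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the node count, the synthetic training data, and the RBF kernel choice are all assumptions made for the example; the paper's actual features are the Candide node displacements between the neutral frame and the apex frame.

```python
# Hedged sketch: expression classification from wireframe node displacements
# with a multiclass SVM. All data below is synthetic, for illustration only.
import numpy as np
from sklearn.svm import SVC

N_NODES = 104  # assumed vertex count (Candide-3 style); not from the paper
LABELS = ["happy", "surprise", "sadness", "anger", "disgust", "fear", "neutral"]

def displacement_features(neutral_nodes, apex_nodes):
    """Flatten per-node (x, y) displacements into one feature vector."""
    return (apex_nodes - neutral_nodes).reshape(-1)

rng = np.random.default_rng(0)
# Synthetic training set: 10 sequences per expression class.
X_train = rng.normal(size=(70, N_NODES * 2))
y_train = np.repeat(np.arange(len(LABELS)), 10)

# One-vs-one decomposition is a common way to make the binary SVM multiclass.
clf = SVC(kernel="rbf", decision_function_shape="ovo")
clf.fit(X_train, y_train)

# Classify one synthetic sequence: neutral-frame vs. apex-frame node positions.
neutral = rng.normal(size=(N_NODES, 2))
apex = neutral + rng.normal(scale=0.1, size=(N_NODES, 2))
pred = clf.predict([displacement_features(neutral, apex)])[0]
print(LABELS[pred])
```

In practice the training displacements would come from tracked sequences in an expression database rather than random vectors, and the feature vectors are typically normalized for face size before training.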