In this paper, facial features extracted from video sequences are explored for characterizing emotions. The emotions considered in this study are anger, fear, happiness, sadness, and neutral. For the proposed emotion recognition study, the required video data were collected at the studio of the Center for Education Technology (CET) at the Indian Institute of Technology (IIT) Kharagpur. The dynamic nature of the grey values of the pixels within the eye and mouth regions is used as the feature for capturing emotion-specific knowledge from facial expressions. Multiscale morphological erosion and dilation operations are used to extract features from the eye and mouth regions, respectively. The features extracted from the left eye, right eye, and mouth regions are used to develop separate models for each emotion category. Autoassociative neural network (AANN) models are used to capture the distribution of the extracted features. The developed models are validated through subject-dependent and subject-independent emotion recognition studies. The overall performance of the proposed emotion recognition system is observed to be about 87%.
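As a minimal sketch of the feature extraction step: the abstract does not specify the structuring elements or how each scale is summarized, so the code below assumes flat square structuring elements of increasing size and a per-scale mean grey value; the function names `multiscale_morph_features` and `frame_features` are hypothetical. It does follow the abstract's pairing of erosion with the eye regions and dilation with the mouth region.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def multiscale_morph_features(region, op, num_scales=3):
    """Apply grey-scale erosion (eye regions) or dilation (mouth
    region) at increasing structuring-element sizes and summarize
    each scale by its mean grey value.

    region: 2-D array of grey values cropped from one frame.
    Square structuring elements and per-scale means are assumptions,
    not the paper's stated configuration.
    """
    morph = grey_erosion if op == "erosion" else grey_dilation
    feats = []
    for s in range(1, num_scales + 1):
        size = 2 * s + 1                         # 3x3, 5x5, 7x7, ...
        feats.append(morph(region, size=(size, size)).mean())
    return np.array(feats)

def frame_features(left_eye, right_eye, mouth, num_scales=3):
    """Feature vector for one frame: erosion features from both
    eye regions followed by dilation features from the mouth."""
    return np.concatenate([
        multiscale_morph_features(left_eye, "erosion", num_scales),
        multiscale_morph_features(right_eye, "erosion", num_scales),
        multiscale_morph_features(mouth, "dilation", num_scales),
    ])
```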
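The abstract also does not give the AANN topology, so the sketch below assumes a common five-layer autoencoder layout (expansion, compression, expansion) with tanh units; `build_aann`, the layer sizes, and the minimum-reconstruction-error decision rule are illustrative assumptions. One model is trained per emotion category, and a test feature vector is assigned to the emotion whose model reconstructs it with the lowest error.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_aann(input_dim, expansion=2, bottleneck=3):
    """Autoassociative NN: trained to reproduce its input, so the
    reconstruction error of a test vector measures how well it fits
    the distribution the model has captured. Layer sizes are
    illustrative, not the paper's."""
    model = keras.Sequential([
        keras.Input(shape=(input_dim,)),
        layers.Dense(expansion * input_dim, activation="tanh"),
        layers.Dense(bottleneck, activation="tanh"),   # compression layer
        layers.Dense(expansion * input_dim, activation="tanh"),
        layers.Dense(input_dim, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def classify(x, models):
    """Assign x to the emotion whose AANN reconstructs it best."""
    errors = {
        emotion: float(np.mean((m.predict(x[None, :], verbose=0)[0] - x) ** 2))
        for emotion, m in models.items()
    }
    return min(errors, key=errors.get)

# Training sketch: one AANN per emotion, each fit only on that
# emotion's feature vectors (an (N, D) array per emotion), e.g.:
# models = {e: build_aann(D) for e in ("anger", "fear", "happy", "sad", "neutral")}
# models["anger"].fit(X_anger, X_anger, epochs=100, verbose=0)
```

In a subject-independent evaluation along the lines the abstract describes, each per-emotion model would be trained on feature vectors pooled across the training subjects and tested on held-out subjects.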