The performance of an automatic facial expression recognition system can be improved significantly by modeling the reliability of the different streams of facial expression information with multistream hidden Markov models (HMMs). In this paper, we present such a multistream HMM facial expression recognition system and analyze its performance. The system uses facial animation parameters (FAPs), defined by the MPEG-4 standard, as features for facial expression classification; specifically, the FAPs describing the movement of the outer-lip contours and the eyebrows serve as observations. Experiments are first performed with single-stream HMMs under several scenarios, using the outer-lip and eyebrow FAPs individually and jointly. We then propose a multistream HMM approach that introduces stream reliability weights dependent on both the facial expression and the FAP group. The stream weights are determined from the recognition results obtained when each FAP stream is used individually. The proposed multistream HMM system with stream reliability weights achieves a 44% relative reduction in facial expression recognition error compared to the single-stream HMM system.
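The core idea of the multistream combination can be sketched in a few lines: each stream's HMM contributes a log-likelihood per expression class, and the streams are fused through reliability weights before taking the maximum. The sketch below is illustrative only; the log-likelihood values, weight values, and names (`outer_lip`, `eyebrow`, the expression labels) are assumptions, not data or parameters from the paper, and the weights are shown uniform across expressions for brevity rather than expression-dependent as proposed.

```python
# Hypothetical per-stream log-likelihoods log P(O_s | expression), e.g. from
# HMMs trained separately on the outer-lip and eyebrow FAP streams.
# The numbers are invented for illustration.
log_likelihoods = {
    "happiness": {"outer_lip": -120.4, "eyebrow": -98.7},
    "surprise":  {"outer_lip": -131.9, "eyebrow": -88.2},
    "anger":     {"outer_lip": -128.3, "eyebrow": -101.5},
}

def multistream_score(stream_ll, weights):
    """Weighted fusion of stream log-likelihoods:
    score = sum over streams s of  w_s * log P(O_s | expression)."""
    return sum(weights[s] * ll for s, ll in stream_ll.items())

def classify(log_likelihoods, weights):
    """Pick the expression whose weighted multistream score is largest."""
    return max(log_likelihoods,
               key=lambda e: multistream_score(log_likelihoods[e], weights))

# Assumed stream reliability weights, e.g. derived from the recognition
# rates of each single-stream system (values are illustrative).
weights = {"outer_lip": 0.6, "eyebrow": 0.4}
print(classify(log_likelihoods, weights))  # prints "happiness"
```

In the full system the weights would vary per expression and per FAP group, so `weights` would be indexed by expression as well; the fusion rule itself is unchanged.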