In American Sign Language (ASL), the structure of signed sentences is conveyed by grammatical markers, which are expressed through facial feature movements and head motions. Without recovering these grammatical markers, a sign language recognition system cannot fully reconstruct a signed sentence. However, this problem has been largely neglected in the literature. In this paper, we propose a 2-layer Conditional Random Field model for recognizing continuously signed grammatical markers in ASL. This recognition requires identifying both facial feature movements and head motions while dealing with the uncertainty introduced by movement epenthesis and other effects. We used videos of the signers' faces, recorded while they signed simple sentences containing multiple grammatical markers. In our experiments, the proposed classifier achieved a precision of 93.76% and a recall of 85.54%.
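The paper does not spell out the inference procedure of its 2-layer model here, but the core decoding step in any linear-chain CRF layer is Viterbi search over per-frame label scores and label-transition scores. The sketch below is illustrative only, assuming log-potential score matrices as inputs; it is not the authors' implementation.

```python
import numpy as np

def crf_viterbi(emissions, transitions):
    """Most likely label sequence under a linear-chain CRF.

    emissions:   (T, K) per-frame label scores (log potentials)
    transitions: (K, K) label-to-label scores (log potentials)
    Returns the highest-scoring label index sequence of length T.
    """
    T, K = emissions.shape
    score = emissions[0].copy()            # best score ending at frame 0 in each label
    backptr = np.zeros((T, K), dtype=int)  # argmax predecessors for traceback
    for t in range(1, T):
        # cand[i, j]: score of best path ending at t-1 in label i, then moving to j
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # trace the best path backwards from the highest-scoring final label
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]
```

In a layered model of this kind, a first layer would typically score low-level facial/head motion observations per frame, and a second layer would decode grammatical-marker labels over the first layer's outputs using the same dynamic program.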