ECCV'10 Proceedings of the 11th European conference on Trends and Topics in Computer Vision - Volume Part I
Given that sign language is used as a primary means of communication by as many as two million deaf individuals in the U.S., and as augmentative communication by hearing individuals with a variety of disabilities, the development of robust, real-time sign language recognition technologies would be a major step forward in making computers equally accessible to everyone. However, most research in the field of sign language recognition has focused on the manual component of signs, despite the fact that critical grammatical information is expressed through facial expressions and head gestures. We propose a novel framework for robust tracking and analysis of facial expressions and head gestures, with an application to sign language recognition. We then apply it to recognize, with excellent accuracy (≥95%), two classes of grammatical expressions, namely wh-questions and negative expressions. Our method is signer-independent and builds on the popular "bag-of-words" model, utilizing spatial pyramids to model facial appearance and temporal pyramids to represent patterns of head pose changes.
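The spatial-pyramid extension of the bag-of-words model mentioned above can be illustrated with a short sketch. The abstract does not give implementation details, so the function below is a generic spatial-pyramid histogram in the style of Lazebnik et al.'s spatial pyramid matching, not the authors' exact method; the function name, grid layout, and level weighting are assumptions for illustration. Given quantized visual-word labels and their image locations, it concatenates per-cell word histograms over successively finer grids:

```python
import numpy as np

def spatial_pyramid_histogram(positions, words, img_size, vocab_size, levels=2):
    """Concatenate weighted bag-of-words histograms over a spatial pyramid.

    positions  -- (N, 2) array of (x, y) feature locations
    words      -- (N,) array of visual-word indices in [0, vocab_size)
    img_size   -- (width, height) of the image
    levels     -- finest pyramid level; level l uses a 2^l x 2^l grid
    """
    w, h = img_size
    hists = []
    for level in range(levels + 1):
        cells = 2 ** level
        # Pyramid-match style weighting: coarser levels contribute less,
        # because coarse matches are weaker evidence of correspondence.
        weight = 1.0 / (2 ** levels) if level == 0 else 1.0 / (2 ** (levels - level + 1)) * 2
        for cy in range(cells):
            for cx in range(cells):
                # Select features falling inside this grid cell.
                in_cell = (
                    (positions[:, 0] >= cx * w / cells) & (positions[:, 0] < (cx + 1) * w / cells) &
                    (positions[:, 1] >= cy * h / cells) & (positions[:, 1] < (cy + 1) * h / cells)
                )
                hist = np.bincount(words[in_cell], minlength=vocab_size).astype(float)
                hists.append(weight * hist)
    return np.concatenate(hists)
```

The resulting vector has length `vocab_size * (1 + 4 + ... + 4^levels)` and can be fed to a standard classifier (e.g., an SVM with a histogram-intersection kernel). A temporal pyramid works the same way in one dimension, partitioning a video's frame range instead of the image plane.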