Identifying Sign Language Videos in Video Sharing Sites
ACM Transactions on Accessible Computing (TACCESS)
Video sharing sites provide an opportunity for the collection and use of sign language presentations about a wide range of topics. Currently, locating sign language videos (SL videos) on such sharing sites relies on the existence and accuracy of tags, titles, or other metadata indicating that the content is in sign language. In this paper, we describe the design and evaluation of a classifier for distinguishing between sign language videos and other videos. A test collection of SL videos and videos likely to be incorrectly recognized as SL videos (likely false positives) was created for evaluating alternative classifiers. Five video features thought to be potentially valuable for this task were developed based on common video analysis techniques. A comparison of the relative value of the five video features shows that a measure of the symmetry of movement relative to the face is the best feature for distinguishing sign language videos. Overall, an SVM classifier provided with all five features achieves 82% precision and 90% recall when tested on the challenging test collection. Performance is likely to be considerably higher when applied to the more varied collections of large video sharing sites.
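The evaluation above is reported in terms of precision and recall over binary SL/non-SL decisions. As a minimal sketch of how those figures follow from a classifier's confusion counts (the counts below are illustrative, chosen only so the arithmetic reproduces the 82%/90% figures; they are not the paper's actual test-set sizes):

```python
# Precision and recall for a binary sign-language-video classifier,
# computed from confusion counts: tp = SL videos correctly flagged,
# fp = non-SL videos wrongly flagged, fn = SL videos missed.
# The specific counts used here are hypothetical, not from the paper.

def precision_recall(tp, fp, fn):
    """Return (precision, recall) from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example: 100 true SL videos, of which 90 are found (tp=90, fn=10),
# plus 20 likely-false-positive videos wrongly flagged (fp=20).
p, r = precision_recall(tp=90, fp=20, fn=10)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.82 recall=0.90
```

Note the trade-off this makes concrete: a collection deliberately seeded with likely false positives (as the test collection here was) inflates `fp` relative to an ordinary sharing-site sample, which is why precision on a general collection would be expected to exceed the reported 82%.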