Current research in computer-based sign language recognition aims to recognize signs from video content. Most existing studies of video-based sign language recognition rely on classical learning approaches because of their acceptable results: HMMs, neural networks, matching techniques, and fuzzy classifiers are widely used for video recognition when large training data are available. To date, there has been considerable progress in the field of animation generation. These tools improve the accessibility of information and services for deaf individuals with low literacy levels, and they rely mainly on the 3D content standard (X3D) for their sign language animation. Signed animations are therefore becoming common. However, in this new field, few works have tried to apply classical learning techniques to sign language recognition from 3D-based content. Most studies use the positions or rotations of the virtual agent's articulations as training data for classifiers or for matching techniques. Unfortunately, existing animation generation software uses different 3D virtual agent content, so articulation positions and rotations differ from one system to another; consequently, this recognition method is not efficient. In this paper, we propose a methodological foundation for future research on recognizing signs from any sign language 3D content. Our new approach provides a method, invariant to changes in sign position, based on 3D motion trajectory analysis. Our recognition experiments covered 900 ASL signs, using the Microsoft Kinect sensor to manipulate our X3D virtual agent. We successfully recognized 887 isolated signs, with a 98.5% recognition rate and a recognition response time of 0.3 seconds.
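The position-invariance idea can be sketched concretely. Below is a minimal illustration, not the authors' implementation: the function names (`normalize_trajectory`, `trajectory_distance`), the centroid-centering plus unit-scale normalization, and the fixed-length resampling are assumptions chosen for the example, under the premise that a sign is represented as a sampled 3D trajectory of a virtual agent's joint.

```python
import numpy as np

def normalize_trajectory(points):
    """Hypothetical sketch: make a 3D joint trajectory invariant to
    the sign's absolute position (and overall scale).

    points: (N, 3) array of 3D positions, e.g. a wrist joint sampled
    over the duration of a sign. Centering on the centroid removes
    the dependence on where the avatar performs the sign; dividing by
    the maximum radius removes the dependence on avatar size."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale if scale > 0 else centered

def trajectory_distance(a, b, n_samples=50):
    """Compare two signs by resampling their normalized trajectories
    to a common length and summing pointwise Euclidean distances."""
    def resample(traj):
        idx = np.linspace(0, len(traj) - 1, n_samples)
        return traj[np.round(idx).astype(int)]
    a_n = resample(normalize_trajectory(np.asarray(a, dtype=float)))
    b_n = resample(normalize_trajectory(np.asarray(b, dtype=float)))
    return float(np.linalg.norm(a_n - b_n, axis=1).sum())
```

With a distance of this kind, a query trajectory captured from any animation system could be matched against a dictionary of normalized reference trajectories by nearest neighbor, regardless of where in 3D space each system places its virtual agent.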