A teaching system of Japanese sign language using sign language recognition and generation
Proceedings of the tenth ACM international conference on Multimedia
A JSL (Japanese Sign Language) sentence is represented by connecting several signed words continuously, and between the signed words there are transition movements that carry no meaning in the signed sentence. Therefore, to translate JSL into Japanese, each signed word must first be detected with high accuracy from the input gesture of a JSL sentence, and then a proper sequence of the recognized signed words must be generated. To achieve this, we have developed (1) a method for effectively detecting the borders of the signed words in ordinary signed gestures and segmenting them, (2) a method for detecting whether a signed gesture is performed with one hand or both hands, and (3) a method for distinguishing the segments representing the signed words from the segments representing the transitions. We carried out an experiment with 200 samples of 10 JSL sentences to recognize the sequences of signed words using the developed methods. As a result, word-level accuracy improved from 77.6% to 86.6%, and sentence-level accuracy improved from 46.0% to 58.0%. These results indicate that the developed methods are effective.
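As a rough illustration of border detection between signed words, the sketch below segments a stream of per-frame hand speeds by treating low-motion frames as candidate borders. This is a hypothetical minimal example, not the paper's actual method: the function name, the speed threshold, and the use of a single 1-D speed signal are all assumptions made for illustration.

```python
# Hypothetical sketch of word-border detection in a gesture stream.
# Assumption (not from the paper): borders between signed words show up
# as frames where hand speed drops below a fixed threshold.

def segment_by_speed(speeds, threshold=0.2):
    """Split per-frame hand speeds into high-motion segments.

    Frames with speed below `threshold` are treated as candidate
    borders (low-motion points between signed words). Returns a list
    of (start, end) index pairs, end exclusive, for each segment.
    """
    segments = []
    start = None
    for i, s in enumerate(speeds):
        if s >= threshold and start is None:
            start = i                      # a segment begins
        elif s < threshold and start is not None:
            segments.append((start, i))    # a segment ends at a border
            start = None
    if start is not None:                  # stream ended mid-segment
        segments.append((start, len(speeds)))
    return segments

speeds = [0.05, 0.4, 0.5, 0.1, 0.05, 0.6, 0.7, 0.3, 0.02]
print(segment_by_speed(speeds))  # [(1, 3), (5, 8)]
```

Each returned segment would then still need to be classified as either a signed word or a meaningless transition, which is the role of the third method described in the abstract.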