MDS: a multimodal-based dialog system
MULTIMEDIA '00 Proceedings of the eighth ACM international conference on Multimedia
In this paper, we describe a system that recognizes both isolated and continuous Chinese Sign Language (CSL), using two CyberGloves and two 3SPACE position trackers as gesture input devices. To obtain robust gesture features, each joint angle collected by the CyberGloves is normalized. The position and orientation of the left hand relative to the right hand are proposed as signer-position-independent features. To speed up recognition, fast-match and frame-prediction techniques are proposed. To tackle the movement-epenthesis problem, context-dependent models are obtained with a Dynamic Programming (DP) technique, and HMMs are used to model basic word units. We then describe the training of the bigram language model and the search algorithm used in our baseline system, which converts sentence-level gestures synchronously into synthesized speech and the gestures of a 3D virtual human. Experiments show that these techniques are efficient in both recognition speed and recognition performance.
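The two feature-extraction ideas above can be illustrated with a minimal sketch. The calibration ranges, function names, and the simple difference-vector relative-position feature are assumptions for illustration, not the paper's exact scheme:

```python
# Sketch of signer-position-independent feature extraction (assumed scheme).

def normalize_joint_angles(angles, lo, hi):
    """Min-max normalize each CyberGlove joint angle to [0, 1],
    given assumed per-joint calibration ranges [lo, hi]."""
    return [(a - l) / (h - l) if h > l else 0.0
            for a, l, h in zip(angles, lo, hi)]

def relative_hand_position(left_pos, right_pos):
    """Express the left-hand tracker position relative to the
    right hand, so the feature is independent of where the
    signer stands (a simple difference vector, as an example)."""
    return [l - r for l, r in zip(left_pos, right_pos)]
```

For example, a raw joint reading of 45 with a calibration range of [0, 90] maps to 0.5, and subtracting the right-hand position removes the signer's absolute location from the feature vector.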