American Sign Language word recognition with a sensory glove using artificial neural networks
Engineering Applications of Artificial Intelligence
Sign language (SL), a highly visual-spatial, linguistically complete natural language, is the primary mode of communication among deaf people. This paper describes two American Sign Language (ASL) word recognition systems developed with artificial neural networks (ANNs) to translate ASL words into English. The first system uses feature vectors of signed words sampled at five time instants; the second uses histograms of those feature vectors. Both systems extract gesture features with a CyberGlove™ sensory glove and a Flock of Birds® 3-D motion tracker: finger-joint angles from the glove's strain gauges define the hand shape, and the tracker data describe the trajectory of hand movement. In each system, the sensor data are processed by two neural networks: a velocity network and a word recognition network. The velocity network uses hand speed to determine the duration of words. Signs are represented by features such as hand shape, hand location, orientation, movement, bounding box, and distance; the recognition network then classifies ASL signs into English words from these features or their histograms. We trained and tested the ANN models on 60 ASL words with varying numbers of samples and compared the two methods. Test results show recognition accuracies of 92% and 95% for the two systems, respectively.
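The velocity network's role, as the abstract describes it, is to use hand speed to find where each signed word begins and ends. A minimal sketch of that segmentation step is shown below; the paper uses a neural network for this, whereas the sketch substitutes a simple speed threshold, and the sampling rate, threshold, and minimum-length values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def hand_speed(positions, dt=1 / 60):
    """Per-frame hand speed from 3-D tracker samples.

    positions: (T, 3) array of tracker positions; dt is the assumed
    sampling interval (60 Hz here, an illustrative value).
    """
    velocity = np.diff(positions, axis=0) / dt      # (T-1, 3) finite differences
    return np.linalg.norm(velocity, axis=1)         # speed magnitude per frame

def segment_words(speed, threshold=0.05, min_len=5):
    """Mark word intervals as runs of frames where speed exceeds a threshold.

    Stands in for the paper's velocity network: low speed is treated as a
    pause between words. Returns a list of (start, end) frame indices.
    """
    active = speed > threshold
    segments, start = [], None
    for i, moving in enumerate(active):
        if moving and start is None:
            start = i
        elif not moving and start is not None:
            if i - start >= min_len:                # drop spurious short bursts
                segments.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments

# Synthetic example: hand at rest, then moving along x, then at rest again.
pos = np.zeros((60, 3))
pos[20:40, 0] = 0.01 * np.arange(20)
pos[40:, 0] = pos[39, 0]
words = segment_words(hand_speed(pos))
```

Once word boundaries are found this way, each segment's feature vector (or histogram of feature vectors) would be passed to the second, word-recognition network for classification.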