This paper presents a solution for user-independent recognition of isolated Arabic sign language gestures. The video-based gestures are preprocessed to segment out the signer's hands via color segmentation of colored gloves. The prediction errors of consecutive segmented images are then accumulated into two images according to the directionality of the motion, and different accumulation weights are employed to further preserve the directionality of the projected motion. Normally, a gesture is represented by hand movements; however, additional user-dependent head and body movements may be present. In the user-independent mode, such user-dependent information is filtered out by encapsulating the movements of the segmented hands in a bounding box. The encapsulated images of the projected motion are then transformed into the frequency domain using the Discrete Cosine Transform (DCT). Feature vectors are formed by applying zonal coding to the DCT coefficients with varying cutoff values. Classification techniques such as k-nearest neighbors (KNN) and polynomial classifiers are used to assess the validity of the proposed user-independent feature extraction schemes. An average classification rate of 87% is reported.
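The DCT-plus-zonal-coding feature step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the input is a grayscale accumulated-motion image (a NumPy array), builds an orthonormal 2-D DCT from first principles, and retains only the low-frequency coefficients `C[u, v]` with `u + v < cutoff` as the feature vector. All function names are illustrative.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    # M[r, c] = cos(pi * (2c + 1) * r / (2n))
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1.0 / np.sqrt(n)        # DC row scaling
    M[1:] *= np.sqrt(2.0 / n)       # AC rows scaling
    return M

def dct2(img):
    """Separable 2-D DCT: transform rows and columns."""
    M = dct_matrix(img.shape[0])
    N = dct_matrix(img.shape[1])
    return M @ img @ N.T

def zonal_features(img, cutoff):
    """Zonal coding: keep DCT coefficients in the low-frequency
    triangular zone u + v < cutoff, scanned by increasing u + v."""
    C = dct2(img)
    feats = [C[u, s - u]
             for s in range(cutoff)
             for u in range(min(s + 1, C.shape[0]))
             if s - u < C.shape[1]]
    return np.array(feats)

# Example: a constant 8x8 image yields only a DC coefficient.
features = zonal_features(np.ones((8, 8)), cutoff=3)  # 6 coefficients
```

Raising the cutoff enlarges the retained zone (the feature vector has `cutoff * (cutoff + 1) / 2` entries for a large enough image), trading dimensionality against how much high-frequency motion detail the classifier sees.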