IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
This work introduces two novel approaches to feature extraction for video-based Arabic sign language recognition: motion representation through motion estimation and motion representation through motion residuals. In the former, motion estimation is used to compute the motion vectors of a video-based deaf sign or gesture. In the preprocessing stage for feature extraction, the horizontal and vertical components of these vectors are rearranged into intensity images and transformed into the frequency domain. In the second approach, motion is represented through motion residuals, which are thresholded and then transformed into the frequency domain. Because both approaches preserve the temporal dimension of the video-based gesture, hidden Markov models are used for classification. Additionally, this paper proposes projecting the motion information onto the time domain through either telescopic motion vector composition or polar accumulated differences of motion residuals. Feature vectors are then extracted from the projected motion information, and classification can be carried out with simple classifiers such as Fisher's linear discriminant. The paper reports the classification accuracy of the proposed solutions; comparisons with existing work reveal that up to 39% of the misclassifications are corrected.
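The motion-residual pipeline described in the abstract (frame differencing, thresholding, frequency-domain transform, feature extraction) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the threshold value, the use of a 2D FFT as the frequency transform, and the choice of keeping a low-frequency coefficient block are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def residual_features(prev_frame, curr_frame, thresh=10, k=16):
    """Hypothetical sketch of motion-residual feature extraction:
    frame difference -> threshold -> frequency transform -> low-frequency block."""
    # Motion residual: signed difference between successive grayscale frames.
    residual = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    # Threshold the residual magnitude to keep only significant motion regions.
    binary = (np.abs(residual) > thresh).astype(np.float64)
    # Transform into the frequency domain (2D FFT magnitude used here as a
    # stand-in; the paper's exact transform is not specified in this excerpt).
    spectrum = np.abs(np.fft.fft2(binary))
    # Retain the k x k low-frequency coefficients as the per-frame feature vector.
    return spectrum[:k, :k].ravel()
```

In a temporal model such as an HMM, one such feature vector would be computed per frame pair, and the resulting sequence passed to the classifier; the time-domain projection variants would instead accumulate the residuals across the whole gesture before a single feature vector is extracted.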