Human perception of sign language can serve as inspiration for the improvement of automatic recognition systems. Experiments with human signers show that sign language signs contain redundancy over time. In this article, experiments are conducted to investigate whether comparable redundancies also exist for an automatic sign language recognition system. Such redundancies could be exploited, for example, by reserving more processing resources for the more informative phases of a sign, or by discarding uninformative phases. In the experiments, an automatic system is trained and tested on isolated fragments of sign language signs. The stimuli used were similar to those of the human signer experiments, allowing us to compare the results. The experiments show that redundancy over time exists for the automatic recognizer. The central phase of a sign is the most informative phase, and the first half of a sign is sufficient to achieve a recognition performance similar to that of the entire sign. These findings concur with the results of the human signer studies. However, there are differences as well, most notably the fact that human signers score better on the early phases of a sign than the automatic system. The results can be used to improve the automatic recognizer, by using only the most informative phases of a sign as input.
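The core experimental idea — training and testing a recognizer on isolated temporal fragments of a sign, such as its central phase or first half — can be sketched as follows. This is a minimal illustration, not the authors' actual system; the fragment slicing, the plain DTW distance, and the nearest-neighbour classifier (and all function names) are simplifying assumptions for exposition.

```python
import numpy as np

def take_fragment(frames, start_frac, end_frac):
    """Keep only a temporal fragment of a sign, e.g. (0.0, 0.5) for the
    first half or (0.25, 0.75) for the central phase. (Illustrative.)"""
    n = len(frames)
    lo = int(start_frac * n)
    hi = max(int(end_frac * n), lo + 1)  # keep at least one frame
    return frames[lo:hi]

def dtw_distance(a, b):
    """Plain dynamic-time-warping distance between two sequences of
    per-frame feature vectors (a stand-in for a real sign recognizer)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(fragment, templates):
    """Nearest-neighbour label: compare the fragment against one
    template sequence per sign class."""
    return min(templates, key=lambda label: dtw_distance(fragment, templates[label]))
```

With such a setup, one can measure recognition accuracy as a function of which fragment is presented — the comparison the article draws between the automatic system and human signers.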