This paper proposes a method to detect and extract hand features from video sequences in which a person performs Thai Sign Language (TSL), for recognizing static TSL alphabets. First, skin regions are segmented using a trained skin color model represented in the YCbCr color space. Next, Haar-like features are used to locate the initial positions of the face and hands for subsequent tracking. During tracking, object hypotheses and template matching are employed to track the face and hands even when occlusion occurs. The motion and shape of the hands are used to determine the gesture state and to extract sign key frames. To recognize TSL alphabets, the hand postures are first classified into groups using the convexity defect points of the hand shape. The Hu moments of the hand shape are then matched within each group using a K-nearest neighbor classifier. Results are shown for several videos of a professional TSL interpreter signing 42 TSL alphabets.
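The first stage of the pipeline, skin segmentation in YCbCr space, can be sketched as below. This is a minimal illustration, not the paper's trained model: the BT.601 RGB-to-YCbCr conversion is standard, but the Cb/Cr threshold ranges shown are illustrative defaults commonly cited in the skin-detection literature, whereas the paper learns its skin color model from training data.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB image (float, 0-255 range) to YCbCr via ITU-R BT.601."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Return a boolean mask of likely skin pixels.

    The Cb/Cr ranges are illustrative literature defaults, not the
    paper's trained skin model. Luminance (Y) is ignored, which gives
    some robustness to lighting changes.
    """
    ycbcr = rgb_to_ycbcr(rgb.astype(np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

# A 1x2 image: one skin-toned pixel, one green pixel.
img = np.array([[[200, 150, 120], [0, 255, 0]]], dtype=np.uint8)
print(skin_mask(img))  # skin-toned pixel True, green pixel False
```

In a full system, the resulting mask would be cleaned with morphological operations and its connected components passed on to the Haar-based face/hand labeling step described above.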