CBIR over multiple projections of 3D objects
BioID_MultiComm'09 Proceedings of the 2009 joint COST 2101 and 2102 international conference on Biometric ID management and multimodal communication
The task of recognizing letters of the sign language alphabet, by means of which hearing-impaired people finger-spell words and proper nouns, is interpreted as a CBIR (Content-Based Image Retrieval) problem. An arbitrary input image of a given sign (palm gesture) is treated as a query against a database (DB) that contains a sufficiently large set of images (i.e., projections from a sufficient number of viewpoints) for each letter of the sign language alphabet. We assume that the gestures to be recognized are static images that have been appropriately extracted from the input video sequence, and that a CBIR method for accessing the image DB is available that is both fast enough and noise-tolerant. The paper describes the methodology used to build up the DB of image samples as well as an experimental study of the noise tolerance of the available CBIR method; the latter is used to confirm the applicability of the proposed approach.
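The retrieval scheme described above can be sketched as a nearest-neighbor search over per-image feature vectors. The abstract does not specify the actual CBIR descriptor or distance measure, so the histogram feature and Euclidean distance below are illustrative assumptions only:

```python
import numpy as np

def feature(img):
    # Hypothetical descriptor: normalized 16-bin intensity histogram.
    # (The paper's actual CBIR feature is not given in the abstract.)
    hist, _ = np.histogram(img, bins=16, range=(0, 256))
    return hist / max(hist.sum(), 1)

def retrieve_letter(db_feats, db_labels, query_img):
    # Return the letter label of the DB projection closest to the query
    # image in feature space (Euclidean distance, assumed for illustration).
    q = feature(query_img)
    dists = np.linalg.norm(db_feats - q, axis=1)
    return db_labels[int(np.argmin(dists))]
```

In the paper's setting, `db_feats` would hold features for many viewpoint projections of each alphabet letter, and noise tolerance of the retrieval step is what the experimental study evaluates.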