The major difficulty in large-vocabulary sign language or gesture recognition lies in the huge search space created by the large number of classes to be recognized. Reducing recognition time without loss of accuracy is therefore a challenging issue. In this paper, a hierarchical decision tree for large-vocabulary sign language recognition is presented, based on the divide-and-conquer principle. Because each sign feature contributes differently to distinguishing gestures, a dedicated classifier is assigned to each gesture attribute in the hierarchical decision. First, a computationally cheap one-/two-handed classifier eliminates many impossible candidates. A hand-shape classifier is then applied to the remaining candidate space. Finally, an SOFM/HMM classifier produces the result at the last non-leaf nodes, which contain only a few candidates each. Experimental results on a large vocabulary of 5113 signs show that the proposed method reduces recognition time by a factor of 11 and improves the recognition rate by about 0.95% over a single SOFM/HMM classifier.
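The cascade described above can be sketched as a sequence of progressively more expensive filters over the candidate sign set. The sketch below is illustrative only: the `Sign` attributes, filter functions, and the stand-in scoring function for the SOFM/HMM stage are all hypothetical names, not the paper's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of a hierarchical decision cascade: cheap classifiers
# prune the candidate vocabulary before an expensive final stage (the paper's
# SOFM/HMM) scores the few survivors.

@dataclass
class Sign:
    name: str
    two_handed: bool
    hand_shape: str

def handedness_filter(candidates, observed_two_handed):
    # Stage 1: a low-cost one-/two-handed test eliminates impossible signs.
    return [s for s in candidates if s.two_handed == observed_two_handed]

def hand_shape_filter(candidates, observed_shape):
    # Stage 2: a hand-shape classifier runs only on the reduced space.
    return [s for s in candidates if s.hand_shape == observed_shape]

def fine_classifier(candidates, score):
    # Stage 3: stand-in for the SOFM/HMM stage, which scores the remaining
    # few candidates and returns the best match.
    return max(candidates, key=score)

def recognize(vocabulary, observed_two_handed, observed_shape, score):
    pool = handedness_filter(vocabulary, observed_two_handed)
    pool = hand_shape_filter(pool, observed_shape)
    return fine_classifier(pool, score)
```

The speedup comes from the ordering: each stage is cheaper than the next, and each shrinks the pool the next stage must consider, so the expensive model rarely sees more than a handful of candidates.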