CopyCat is an American Sign Language (ASL) game that uses gesture recognition technology to help young deaf children practice their ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing a Wizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). The data set is characterized by the disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom; it consists of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22-word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the backs of their wrists. The hand shape information is combined with the accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach with leave-one-out validation: we iterate through the children, training on data from four children and testing on the remaining child's data. The user-independent models achieved average word accuracies per child ranging from 73.73% to 91.75%.
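The abstract does not detail the color histogram adaptation step, but the general technique is well known: maintain a skin/glove color model as a normalized histogram, re-estimate a histogram from recently segmented pixels, and exponentially blend the two so the model tracks illumination changes. A minimal NumPy sketch under those assumptions (the function names and the blending weight `alpha` are illustrative, not from the paper):

```python
import numpy as np

def normalize(hist):
    """Scale a histogram so its bins sum to 1 (a probability distribution)."""
    total = hist.sum()
    return hist / total if total > 0 else hist

def adapt_histogram(model_hist, frame_hist, alpha=0.1):
    """Blend the running color model with the histogram measured in the
    current frame; alpha controls how fast the model adapts to
    illumination changes (alpha=0 freezes the model)."""
    blended = (1.0 - alpha) * normalize(model_hist) + alpha * normalize(frame_hist)
    return normalize(blended)

def backproject(model_hist, hue_image, n_bins=32):
    """Label each pixel with the model probability of its hue bin,
    yielding a likelihood map that can be thresholded for segmentation."""
    bins = np.minimum(hue_image.astype(np.int64) * n_bins // 256, n_bins - 1)
    return model_hist[bins]
```

Thresholding the back-projected likelihood map gives a binary hand mask; the adapted histogram is carried forward to the next frame.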
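The evaluation protocol described above (train on four children, test on the fifth, and rotate) is leave-one-subject-out cross-validation. A schematic version, with placeholder `train_fn`/`eval_fn` callables standing in for the paper's HMM training and recognition:

```python
def leave_one_subject_out(data_by_subject, train_fn, eval_fn):
    """Leave-one-subject-out cross-validation.

    data_by_subject: dict mapping subject id -> list of samples.
    train_fn: callable taking a list of samples, returning a model.
    eval_fn: callable taking (model, test_samples), returning a score.
    Returns a dict mapping each held-out subject id to its score.
    """
    results = {}
    for held_out in data_by_subject:
        # Pool every sample that does NOT belong to the held-out subject.
        train_samples = [s for subj, samples in data_by_subject.items()
                         if subj != held_out
                         for s in samples]
        model = train_fn(train_samples)
        results[held_out] = eval_fn(model, data_by_subject[held_out])
    return results
```

Because the test child's data never appears in training, each score estimates performance on an entirely unseen signer, which is what the reported per-child word accuracies measure.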