American sign language recognition in game development for deaf children

  • Authors:
  • Helene Brashear;Valerie Henderson;Kwang-Hyun Park;Harley Hamilton;Seungyon Lee;Thad Starner

  • Affiliations:
  • College of Computing, Atlanta, Georgia;College of Computing, Atlanta, Georgia;Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea;Atlanta Area School for the Deaf, Clarkston, Georgia;College of Computing, Atlanta, Georgia;College of Computing, Atlanta, Georgia

  • Venue:
  • Proceedings of the 8th international ACM SIGACCESS conference on Computers and accessibility
  • Year:
  • 2006

Abstract

CopyCat is an American Sign Language (ASL) game that uses gesture recognition technology to help young deaf children practice ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on the problem of continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing a Wizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). The data set is characterized by the disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom. It consists of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22-word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the backs of their wrists. The hand shape information is combined with accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach using leave-one-out validation: this technique iterates through each child, training on data from the other four children and testing on the remaining child's data. We achieved average word accuracies per child ranging from 73.73% to 91.75% for the user-independent models.
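The leave-one-child-out evaluation described above can be sketched as follows. This is a minimal illustration of the validation scheme only: a trivial nearest-centroid classifier on hypothetical feature vectors stands in for the paper's hidden Markov models, and all names (`leave_one_out_splits`, `word_accuracy_per_child`, etc.) are illustrative, not from the original work.

```python
from collections import defaultdict

def leave_one_out_splits(samples):
    """Yield (held_out_child, train_samples, test_samples) for each child.
    `samples` is a list of (child_id, feature_vector, sign_label) tuples."""
    children = sorted({child for child, _, _ in samples})
    for held_out in children:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

def train_centroids(train):
    """Mean feature vector per sign label (a stand-in for HMM training)."""
    sums, counts = {}, defaultdict(int)
    for _, feats, label in train:
        if label not in sums:
            sums[label] = list(feats)
        else:
            sums[label] = [a + b for a, b in zip(sums[label], feats)]
        counts[label] += 1
    return {label: [v / counts[label] for v in vec]
            for label, vec in sums.items()}

def classify(centroids, feats):
    """Pick the sign whose centroid is closest in squared Euclidean distance."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], feats))
    return min(centroids, key=dist)

def word_accuracy_per_child(samples):
    """Fraction of held-out signs classified correctly, keyed by child:
    train on every other child's data, test on the held-out child."""
    results = {}
    for child, train, test in leave_one_out_splits(samples):
        centroids = train_centroids(train)
        correct = sum(1 for _, feats, label in test
                      if classify(centroids, feats) == label)
        results[child] = correct / len(test)
    return results
```

Because each fold trains only on other children's signing, the resulting per-child accuracies measure user-independent performance, which is the quantity the abstract reports.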