American Sign Language recognition with the Kinect

  • Authors:
  • Zahoor Zafrulla, Helene Brashear, Thad Starner, Harley Hamilton, Peter Presti

  • Affiliations:
  • Georgia Institute of Technology, Atlanta, GA, USA (Zafrulla, Starner, Hamilton, Presti); Tin Man Labs, LLC, Austin, TX, USA (Brashear)

  • Venue:
  • ICMI '11: Proceedings of the 13th International Conference on Multimodal Interfaces
  • Year:
  • 2011


Abstract

We investigate the potential of the Kinect depth-mapping camera for sign language recognition and verification in educational games for deaf children. We compare a prototype Kinect-based system to our current CopyCat system, which uses colored gloves and embedded accelerometers to track children's hand movements. If successful, a Kinect-based approach could improve interactivity, user comfort, system robustness, system sustainability, cost, and ease of deployment. We collected a total of 1000 American Sign Language (ASL) phrases across both systems. On adult data, the Kinect system achieved sentence verification rates of 51.5% when users were seated and 76.12% when they were standing. These rates are comparable to the 74.82% verification rate of the current (seated) CopyCat system. While the Kinect computer vision system requires more tuning for seated use, the results suggest that the Kinect may be a viable option for sign verification.
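The sentence verification rates reported above are simple acceptance percentages. As an illustrative sketch only (the function name and counts below are hypothetical, not from the paper), the metric can be computed as the fraction of test phrases the verifier accepts as correctly signed:

```python
def verification_rate(num_accepted: int, num_total: int) -> float:
    """Percentage of test phrases the verifier accepted as correctly signed.

    Hypothetical helper for illustration; the paper does not publish its
    per-condition phrase counts, only the resulting percentages.
    """
    if num_total <= 0:
        raise ValueError("num_total must be positive")
    return 100.0 * num_accepted / num_total


# Hypothetical counts chosen only to show the arithmetic:
# 103 of 200 phrases accepted -> 51.5%
print(verification_rate(103, 200))  # → 51.5
```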