Definition and recovery of kinematic features for recognition of American sign language movements

  • Authors:
  • Konstantinos G. Derpanis; Richard P. Wildes; John K. Tsotsos

  • Affiliations:
  • York University, Department of Computer Science and Engineering, 4700 Keele Street, Toronto, Ont., Canada M3J 1P3 and York University, Centre for Vision Research (CVR), 4700 Keele Street, Toronto, ... (all authors)

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2008

Abstract

An approach to recognizing human hand gestures from a monocular temporal sequence of images is presented. Of concern is the representation and recognition of hand movements used in single-handed American Sign Language (ASL). The approach exploits previous linguistic analyses of manual languages, which decompose dynamic gestures into their static and dynamic components. The first level of decomposition is in terms of three sets of primitives: hand shape, location, and movement. Further levels of decomposition involve the lexical and sentence levels and are beyond the scope of the present paper. We propose, and subsequently demonstrate, that kinematic features recovered from the apparent motion in a monocular gesture sequence provide distinctive signatures for 14 primitive movements of ASL. The approach has been implemented in software and evaluated on a database of 592 gesture sequences, with an overall recognition rate of 86% for fully automated processing and 97% for manually initialized processing.
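
For intuition only, the sketch below illustrates how simple kinematic descriptors might be computed from a tracked hand-centroid trajectory in a monocular sequence. It is a minimal illustration under assumed inputs, not the authors' actual feature set or pipeline (which the abstract does not detail); the tracking step is assumed given, and all function and variable names are hypothetical.

# Minimal sketch (assumption-laden, not the paper's method): given a 2D
# hand-centroid trajectory tracked from a monocular image sequence, compute
# simple kinematic descriptors that could help separate movement primitives
# such as straight, arc-like, and circular motions.
import numpy as np

def kinematic_features(traj, dt=1.0):
    """traj: (T, 2) array of hand-centroid image positions over T frames."""
    vel = np.gradient(traj, dt, axis=0)      # apparent image-plane velocity
    acc = np.gradient(vel, dt, axis=0)       # apparent acceleration
    speed = np.linalg.norm(vel, axis=1)

    # Curvature of the image-plane path: |x'y'' - y'x''| / speed^3
    cross = vel[:, 0] * acc[:, 1] - vel[:, 1] * acc[:, 0]
    curvature = np.abs(cross) / np.maximum(speed, 1e-8) ** 3

    # Straightness: chord length over path length (close to 1 for a straight movement)
    path_len = speed.sum() * dt
    chord_len = np.linalg.norm(traj[-1] - traj[0])
    straightness = chord_len / max(path_len, 1e-8)

    return {
        "mean_speed": float(speed.mean()),
        "mean_curvature": float(curvature.mean()),
        "straightness": float(straightness),
    }

if __name__ == "__main__":
    t = np.linspace(0, np.pi, 60)
    arc = np.stack([np.cos(t), np.sin(t)], axis=1)    # arc-like movement
    line = np.stack([t, 0.5 * t], axis=1)             # straight movement
    print(kinematic_features(arc))
    print(kinematic_features(line))

In such a scheme, descriptors like low curvature with high straightness would point toward straight-line primitives, while sustained curvature would point toward circular or arc-like ones; the paper's actual movement signatures are defined in the full text.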