Sign language recognition using a combination of new vision based features

  • Authors:
  • Mahmoud M. Zaki; Samir I. Shaheen

  • Affiliations:
  • Computer Engineering Department, Cairo University, Cairo, Egypt (both authors)

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2011

Quantified Score

Hi-index 0.10
Abstract

Sign languages are based on four components: hand shape, place of articulation, hand orientation, and movement. This paper presents a novel combination of vision-based features to enhance the recognition of the underlying signs. Three features are selected and mapped to these four components. Two of them are newly introduced for American Sign Language recognition: kurtosis position and principal component analysis (PCA). Although PCA has been used before in sign language recognition as a dimensionality reduction technique, it is used here as a descriptor: a global image feature that provides a measure of hand configuration and hand orientation. Kurtosis position is used as a local feature that measures edges and reflects the place of articulation. The third feature is the motion chain code, which represents the hand movement. On the basis of these features a prototype is designed and constructed, and its performance is evaluated. It consists of a skin color detector, a connected component locator, a dominant hand tracker, a feature extractor, and a Hidden Markov Model classifier. The input to the system is a sign from the RWTH-BOSTON-50 database, and the output is the corresponding word, with a recognition error rate of 10.90%.
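The motion chain code mentioned in the abstract can be illustrated with a minimal sketch; this is a generic 8-direction chain code over tracked hand-centroid positions, not the authors' implementation (the function name and the assumption of per-frame centroids are hypothetical):

```python
import math

def motion_chain_code(centroids):
    """Quantize the frame-to-frame displacement of a tracked hand
    centroid into 8-direction chain codes (0 = east, increasing
    counter-clockwise in 45-degree steps). Stationary frames are skipped."""
    codes = []
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        dx, dy = x1 - x0, y1 - y0
        if dx == 0 and dy == 0:
            continue  # no movement between these two frames
        # angle in [0, 2*pi); offset by half a sector so codes are centered
        angle = math.atan2(dy, dx) % (2 * math.pi)
        codes.append(int((angle + math.pi / 8) // (math.pi / 4)) % 8)
    return codes

# Example: a hand moving east, then north, then west
print(motion_chain_code([(0, 0), (1, 0), (1, 1), (0, 1)]))  # → [0, 2, 4]
```

The resulting discrete symbol sequence is exactly the kind of observation stream a Hidden Markov Model classifier can be trained on, one HMM per sign.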