Appearance-Based Recognition of Words in American Sign Language

  • Authors:
Morteza Zahedi, Daniel Keysers, Hermann Ney

  • Affiliations:
Lehrstuhl für Informatik VI – Computer Science Department, RWTH Aachen University, Aachen, Germany (all authors)

  • Venue:
IbPRIA'05: Proceedings of the Second Iberian Conference on Pattern Recognition and Image Analysis - Volume Part I
  • Year:
  • 2005

Abstract

In this paper, we present how appearance-based features can be used for the recognition of words in American Sign Language (ASL) from a video stream. The features are extracted without any segmentation or tracking of the hands or head of the signer, which avoids possible errors in the segmentation step. Experiments are performed on a database that consists of 10 words in ASL with 110 utterances in total. These data are extracted from a publicly available collection of videos and can therefore be used by other research groups. The video streams of two stationary cameras are used for classification, but we observe that one camera alone already leads to sufficient accuracy. Hidden Markov Models and the leave-one-out method are employed for training and classification. Using the simple appearance-based features, we achieve an error rate of 7%. About half of the remaining errors occur for utterances that are visually different from all other utterances of the same word.
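
The sketch below illustrates the overall pipeline the abstract describes: downscaled grayscale frames serve directly as appearance-based features (no hand or head segmentation), one Hidden Markov Model is trained per word, and leave-one-out cross-validation over the utterances yields the error rate. This is a minimal illustration under stated assumptions, not the authors' implementation: the library choice (hmmlearn), the block-averaging downscaling, the 8x8 feature grid, and all HMM parameters are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of appearance-based sign recognition with per-word HMMs
# and leave-one-out evaluation. hmmlearn and all parameter values are
# assumptions for illustration, not taken from the paper.
import numpy as np
from hmmlearn import hmm


def appearance_features(frames, size=(8, 8)):
    """Downscale each grayscale frame to a small fixed grid and flatten it.

    `frames` has shape (T, H, W); the output has shape (T, size[0]*size[1]).
    Block averaging stands in for whatever downscaling the authors used.
    """
    T, H, W = frames.shape
    h, w = size
    cropped = frames[:, : H - H % h, : W - W % w]
    return cropped.reshape(T, h, H // h, w, W // w).mean(axis=(2, 4)).reshape(T, -1)


def loo_error_rate(utterances, labels, n_states=3):
    """Leave-one-out classification: hold out one utterance, train one
    Gaussian HMM per word on the remaining data, and classify the held-out
    utterance by maximum log-likelihood."""
    errors = 0
    for i, (x_test, y_test) in enumerate(zip(utterances, labels)):
        models = {}
        for word in set(labels):
            # Feature sequences of all other utterances of this word.
            train = [x for j, (x, y) in enumerate(zip(utterances, labels))
                     if y == word and j != i]
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                n_iter=20, random_state=0)
            m.fit(np.vstack(train), lengths=[len(x) for x in train])
            models[word] = m
        pred = max(models, key=lambda w: models[w].score(x_test))
        errors += pred != y_test
    return errors / len(utterances)
```

In this scheme a singleton effect follows directly: under leave-one-out, an utterance that looks unlike every other utterance of its word has no visually similar training example left in its own class, which is consistent with the abstract's observation that such utterances account for about half of the remaining errors.
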