Enhancing a Sign Language Translation System with Vision-Based Features

  • Authors:
  • Philippe Dreuw; Daniel Stein; Hermann Ney

  • Affiliation:
  • Human Language Technology and Pattern Recognition, RWTH Aachen University (all authors)

  • Venue:
  • Gesture-Based Human-Computer Interaction and Simulation
  • Year:
  • 2009

Abstract

In automatic sign language translation, one of the main problems is the use of spatial information in sign language and its proper representation and translation, e.g. the handling of spatial reference points in the signing space. Such locations are encoded at static points in signing space and serve as spatial references for motion events. We present a new approach that starts from a large-vocabulary speech recognition system able to recognize sentences of continuous sign language independently of the speaker. The manual features obtained from hand tracking are passed to the statistical machine translation system to improve its accuracy. On a publicly available benchmark database, we achieve competitive recognition performance and show that translation performance can be improved similarly by integrating the tracking features.
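
To make the core idea concrete — passing hand-tracking features into the translation system — the sketch below shows one plausible way such an integration could look. It is a hypothetical illustration, not the authors' implementation: it assumes per-frame 2D hand positions from a tracker, quantizes them into a coarse grid of signing-space regions, and attaches each gloss's region label to form a factored input in the style of factored phrase-based SMT. All names (`quantize_position`, `attach_spatial_factors`, the 3×3 region grid) are illustrative assumptions.

```python
from typing import List, Tuple

# Hypothetical sketch (not the paper's implementation): quantize tracked
# dominant-hand positions into a coarse 3x3 grid over the signing space and
# attach the resulting region label to each recognized gloss, producing a
# factored token of the form "GLOSS|region" for a factored SMT system.

def quantize_position(x: float, y: float,
                      width: int, height: int,
                      grid: int = 3) -> str:
    """Map a pixel coordinate to a grid-cell label such as 'r1c2'."""
    col = min(int(x / width * grid), grid - 1)
    row = min(int(y / height * grid), grid - 1)
    return f"r{row}c{col}"

def attach_spatial_factors(glosses: List[Tuple[str, int]],
                           hand_track: List[Tuple[float, float]],
                           width: int, height: int) -> str:
    """For each (gloss, frame_index) pair, look up the tracked hand
    position at that frame and emit a 'GLOSS|region' token."""
    tokens = []
    for gloss, frame in glosses:
        x, y = hand_track[frame]
        tokens.append(f"{gloss}|{quantize_position(x, y, width, height)}")
    return " ".join(tokens)

if __name__ == "__main__":
    # Toy example: three glosses aligned to frames of a 320x240 video.
    track = [(50.0, 60.0), (160.0, 120.0), (280.0, 200.0)]
    glosses = [("IX-3P", 0), ("GIVE", 1), ("BOOK", 2)]
    print(attach_spatial_factors(glosses, track, width=320, height=240))
    # -> IX-3P|r0c0 GIVE|r1c1 BOOK|r2c2
```

In this sketch, the region factor lets the translation model distinguish, say, a pointing sign (`IX-3P`) directed at one spatial reference point from the same gloss directed at another, which is exactly the kind of spatial-reference ambiguity the abstract describes.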