SignSynth: A Sign Language Synthesis Application Using Web3D and Perl

  • Authors:
  • Angus B. Grieve-Smith

  • Affiliations:
  • -

  • Venue:
  • GW '01 Revised Papers from the International Gesture Workshop on Gesture and Sign Languages in Human-Computer Interaction
  • Year:
  • 2001


Abstract

Sign synthesis (also known as text-to-sign) has recently seen a large increase in the number of projects under development. Many of these focus on translation from spoken languages, but other applications include dictionaries and language learning. I will discuss the architecture of typical sign synthesis applications and mention some of the applications and prototypes currently available. I will focus on SignSynth, a CGI-based articulatory sign synthesis prototype I am developing at the University of New Mexico. SignSynth takes as its input a sign language text in ASCII-Stokoe notation (chosen as a simple starting point) and converts it to an internal feature tree. This underlying linguistic representation is then converted into a three-dimensional animation sequence in Virtual Reality Modeling Language (VRML or Web3D), which is automatically rendered by a Web3D browser.
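The pipeline the abstract describes (notation string → feature tree → VRML scene) can be sketched roughly as follows. This is a minimal illustration, not SignSynth's actual code: the original is a Perl CGI application, the `location/handshape/movement` field layout stands in for real ASCII-Stokoe, and the feature names and VRML node choices are placeholder assumptions.

```python
# Illustrative sketch of a sign-synthesis pipeline in the style the
# abstract describes. The notation format and feature names below are
# hypothetical stand-ins, not the actual ASCII-Stokoe encoding.

def parse_notation(sign: str) -> dict:
    """Parse a toy 'location/handshape/movement' string into a
    feature tree (represented here as a flat dict)."""
    location, handshape, movement = sign.split("/")
    return {
        "location": location,
        "handshape": handshape,
        "movement": movement,
    }

def to_vrml(tree: dict) -> str:
    """Emit a minimal VRML97 fragment from the feature tree.
    A real system would drive joint rotations of an avatar; here we
    only stamp the features into a comment and an empty Transform."""
    return (
        "#VRML V2.0 utf8\n"
        f"# sign: {tree['location']} {tree['handshape']} {tree['movement']}\n"
        "Transform { children [ ] }\n"
    )

if __name__ == "__main__":
    tree = parse_notation("chin/B/down")
    print(to_vrml(tree))
```

The two-stage design mirrors the abstract's point that the feature tree is the underlying linguistic representation: the notation parser and the VRML generator can be swapped independently, so a different input notation or output format only touches one stage.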