Combining two synchronisation methods in a linguistic model to describe sign language

  • Authors: Michael Filhol
  • Affiliation: LIMSI-CNRS, Orsay Cedex, France
  • Venue: GW'11 Proceedings of the 9th International Conference on Gesture and Sign Language in Human-Computer Interaction and Embodied Communication
  • Year: 2011

Abstract

The context is Sign Language modelling for synthesis, with 3D virtual signers as output. Sign languages convey multi-linear information and hence allow for many synchronisation patterns between the articulators of the body. Current models usually cover, at best, only one type of these patterns. To address this problem, and building on the recent description model Zebedee, we introduce the Azalee extension, designed to enable the description of any type of synchronisation in Sign Language.