A framework for continuous multimodal sign language recognition

  • Authors:
  • Daniel Kelly; Jane Reilly Delannoy; John Mc Donald; Charles Markham

  • Affiliation:
  • N.U.I. Maynooth, Maynooth, Co. Kildare, Ireland (all authors)

  • Venue:
  • Proceedings of the 2009 International Conference on Multimodal Interfaces
  • Year:
  • 2009


Abstract

We present a multimodal system for the recognition of manual signs and non-manual signals within continuous sign language sentences. In sign language, information is conveyed mainly through hand gestures (manual signs). Non-manual signals, such as facial expressions, head movements, body postures, and torso movements, express a large part of the grammar and some aspects of the syntax of sign language. In this paper we propose a multichannel HMM-based system to recognize manual signs and non-manual signals. We choose a single non-manual signal, head movement, to evaluate the framework's non-manual recognition. Manual signs and non-manual signals are processed independently using continuous multidimensional HMMs and an HMM threshold model. Experiments demonstrate that our system achieves a detection ratio of 0.95 and a reliability measure of 0.93.