Toward automatic sign language recognition from web3D based scenes

  • Authors:
  • Kabil Jaballah; Mohamed Jemni

  • Affiliations:
  • High School of Science and Techniques of Tunis, UTIC Research Laboratory, Tunisia; High School of Science and Techniques of Tunis, UTIC Research Laboratory, Tunisia

  • Venue:
  • ICCHP '10: Proceedings of the 12th International Conference on Computers Helping People with Special Needs
  • Year:
  • 2010


Abstract

This paper describes the development of a 3D continuous sign language recognition system. Since many systems such as WebSign [1], Vsigns [2], and eSign [3] use Web3D standards to generate 3D signing avatars, 3D signed sentences are becoming common. Hidden Markov Models (HMMs) are the most widely used method for recognizing sign language from video-based scenes; in our case, however, since we are dealing with well-formatted 3D scenes based on the H-Anim and X3D standards, the double stochastic process of HMMs is too costly. We present a novel approach to sign language recognition based on the Longest Common Subsequence (LCS) method. Our recognition experiments, based on a 500-sign lexicon, reach 99% accuracy.
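As a rough illustration of the LCS idea mentioned in the abstract, the sketch below matches a sequence of scene frames against lexicon entries using the classic dynamic-programming LCS. The frame representation (tuples standing in for quantized H-Anim joint data), the lexicon format, and the normalization by sign length are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch: LCS-based matching between a signed X3D/H-Anim scene
# and lexicon entries. Frame encoding and scoring are illustrative assumptions.

from typing import Dict, Hashable, Sequence, Tuple

Frame = Tuple[Hashable, ...]  # e.g., quantized rotations of H-Anim joints (assumed)


def lcs_length(scene: Sequence[Frame], sign: Sequence[Frame]) -> int:
    """Length of the longest common subsequence of two frame sequences."""
    m, n = len(scene), len(sign)
    prev = [0] * (n + 1)  # rolling row: LCS of scene[:i-1] vs sign[:j]
    for i in range(1, m + 1):
        curr = [0] * (n + 1)
        for j in range(1, n + 1):
            if scene[i - 1] == sign[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[n]


def recognize(scene: Sequence[Frame], lexicon: Dict[str, Sequence[Frame]]) -> str:
    """Return the lexicon gloss whose frames best match the scene,
    scored by LCS length normalized by the sign's own length (assumed scoring)."""
    best_gloss, best_score = "", 0.0
    for gloss, frames in lexicon.items():
        score = lcs_length(scene, frames) / max(len(frames), 1)
        if score > best_score:
            best_gloss, best_score = gloss, score
    return best_gloss


if __name__ == "__main__":
    # Toy example with symbolic "postures" standing in for joint data.
    lexicon = {"HELLO": [("A",), ("B",), ("C",)], "THANKS": [("D",), ("E",)]}
    scene = [("A",), ("X",), ("B",), ("C",)]
    print(recognize(scene, lexicon))  # -> HELLO
```

Because X3D/H-Anim scenes give exact, well-structured joint values rather than noisy video features, a deterministic subsequence comparison of this kind can serve where a full HMM decoding would be unnecessarily expensive, which is the trade-off the abstract points to.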