Methodological foundation for sign language 3D motion trajectory analysis

  • Authors:
  • Mehrez Boulares; Mohamed Jemni

  • Affiliations:
  • Research Laboratory of Technologies of Information and Communication & Electrical Engineering (LaTICE), Ecole Supérieure des Sciences et Techniques de Tunis, Tunis, Tunisia (both authors)

  • Venue:
  • IDA'12: Proceedings of the 11th International Conference on Advances in Intelligent Data Analysis
  • Year:
  • 2012

Abstract

Current research in sign language computer recognition aims to recognize signs from video content. The majority of existing studies of video-based sign language recognition use classical learning approaches because of their acceptable results: HMMs, neural networks, matching techniques, and fuzzy classifiers are widely used for video recognition with large training data. To date, there has also been considerable progress in the field of animation generation. These tools help improve access to information and services for deaf individuals with low literacy levels, and they rely mainly on the 3D content standard X3D for their sign language animation. Sign animations are therefore becoming common. However, in this new field there are few works that attempt to apply classical learning techniques to sign language recognition from 3D content. The majority of studies rely on the positions or rotations of the virtual agent's articulations as training data for classifiers or matching techniques. Unfortunately, existing animation generation software uses different 3D virtual agent content, so articulation positions and rotations differ from one system to another; consequently, this recognition method is not efficient. In this paper, we propose a methodological foundation for future research on recognizing signs from any sign language 3D content. Our new approach provides a method that is invariant to changes in sign position, based on 3D motion trajectory analysis. Our recognition experiments were based on 900 ASL signs, using the Microsoft Kinect sensor to manipulate our X3D virtual agent. We successfully recognized 887 isolated signs, with a 98.5% recognition rate and a recognition response time of 0.3 seconds.
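The abstract's central idea is position invariance through 3D motion trajectory analysis. As a rough illustration of that idea only (not the authors' actual algorithm, which the abstract does not detail), the following Python sketch normalizes a 3D hand trajectory by translating it to its centroid and rescaling it, so two renditions of the same sign performed at different positions or scales compare as similar. The function names, the resampling length, and the distance measure are illustrative assumptions.

```python
import numpy as np

def normalize_trajectory(points: np.ndarray) -> np.ndarray:
    """Make a 3D trajectory invariant to position and scale.

    points: (N, 3) array of hand positions over time.
    Translating to the centroid removes dependence on where the sign
    was performed; dividing by the RMS radius removes dependence on
    signer/avatar size. (Illustrative normalization, not the paper's
    exact method.)
    """
    centered = points - points.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / max(scale, 1e-9)

def trajectory_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean point-wise distance between two normalized trajectories,
    resampled to a common length so signs of different durations can
    be compared."""
    n = 50  # common resampling length (arbitrary choice)
    t = np.linspace(0.0, 1.0, n)

    def resample(traj: np.ndarray) -> np.ndarray:
        src = np.linspace(0.0, 1.0, len(traj))
        return np.stack(
            [np.interp(t, src, traj[:, d]) for d in range(3)], axis=1
        )

    ra = resample(normalize_trajectory(a))
    rb = resample(normalize_trajectory(b))
    return float(np.linalg.norm(ra - rb, axis=1).mean())

# Toy check: the same helical trajectory shifted in space should
# still match itself almost perfectly after normalization.
theta = np.linspace(0, 2 * np.pi, 80)
sign = np.stack([np.cos(theta), np.sin(theta), 0.1 * theta], axis=1)
shifted = sign + np.array([5.0, -2.0, 3.0])   # performed elsewhere
print(trajectory_distance(sign, shifted))      # ~0.0
```

In a matching-based recognizer of the kind the abstract mentions, such a distance could be used to assign an observed trajectory to the nearest of the stored sign templates, independently of where the avatar or signer performed the sign.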