Static gesture quantization and DCT based sign language generation

  • Authors:
  • Chenxi Zhang;Feng Jiang;Hongxun Yao;Guilin Yao;Wen Gao

  • Affiliations:
  • School of Computer Science and Technology, Harbin Institute of Technology, P.R.C. (all authors)

  • Venue:
  • ACII'05 Proceedings of the First international conference on Affective Computing and Intelligent Interaction
  • Year:
  • 2005

Abstract

Collecting data for sign language recognition is not a trivial task. The lack of training data has become a bottleneck in research on signer independence and large-vocabulary recognition. A novel sign language generation algorithm is introduced in this paper. The differences between signers are analyzed briefly, and a criterion is introduced to distinguish the same gesture words produced by different signers. Based on that criterion, we propose a sign word generation method combining static gesture quantization and the Discrete Cosine Transform (DCT), which can generate new signers' sign words from existing signers' sign words. The experimental results show that the generated data are not only distinct from the training data but also demonstrated to be effective.
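The abstract only outlines the idea of combining static gesture quantization with the DCT. The following is a minimal sketch, not the authors' implementation, of how such generation could work: an existing signer's gesture trajectory is transformed with a temporal DCT, its DC component (the average hand posture) is replaced by a quantized static-gesture template for the target signer, high-frequency signer-specific detail is smoothed away, and the inverse DCT yields the generated sequence. Function names, feature dimensions, and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct, idct


def generate_sign_word(existing_seq, target_template, keep=10):
    """Sketch of DCT-based sign word generation (assumed interface).

    existing_seq:    (T, D) per-frame gesture features of an existing signer.
    target_template: (D,) quantized static-gesture features of the new signer.
    keep:            number of low-frequency DCT coefficients retained.
    """
    T = existing_seq.shape[0]
    coeffs = dct(existing_seq, axis=0, norm="ortho")  # temporal DCT per feature dim

    # For the orthonormal DCT-II, the DC coefficient equals sqrt(T) * mean frame,
    # so replacing it shifts the whole word toward the new signer's static posture.
    coeffs[0] = np.sqrt(T) * target_template

    # Zero the high-frequency coefficients to smooth signer-specific detail.
    coeffs[keep:] = 0.0

    return idct(coeffs, axis=0, norm="ortho")  # generated (T, D) trajectory


# Example with random stand-in data (100 frames, 20-dimensional features).
rng = np.random.default_rng(0)
existing = rng.normal(size=(100, 20))
template = rng.normal(size=(20,))
generated = generate_sign_word(existing, template)
```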