Sign Language Recognition Based on Position and Movement Using Multi-Stream HMM

  • Authors:
  • Masaru Maebatake;Iori Suzuki;Masafumi Nishida;Yasuo Horiuchi;Shingo Kuroiwa

  • Venue:
  • ISUC '08 Proceedings of the 2008 Second International Symposium on Universal Communication
  • Year:
  • 2008

Abstract

In sign language, hand positions and movements represent the meaning of words. Hence, we have been developing sign language recognition methods that use both hand positions and movements. However, in previous studies, each feature was given the same weight when calculating the probability for recognition. In this study, we propose a sign language recognition method that uses a multi-stream HMM technique to reflect the relative importance of position and movement information for sign language recognition. We conducted recognition experiments using 21,960 sign language word data. As a result, 75.6% recognition accuracy was obtained with the appropriate weights (position:movement = 0.2:0.8), while 70.6% was obtained with equal weights. From this result, we can conclude that hand movement is more important for sign language recognition than hand position. In addition, we conducted experiments to determine the optimal number of states and mixtures; the best accuracy was obtained with 15 states and two mixtures for each word HMM.
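
The core idea in the abstract is the multi-stream HMM emission probability, in which each state combines per-stream likelihoods raised to stream weights, b_j(o_t) = prod_s b_js(o_st)^gamma_s. The sketch below is not the authors' code; it is a minimal Python illustration of that weighted combination for a position stream and a movement stream, assuming diagonal-covariance Gaussian output densities per stream. The feature values and Gaussian parameters are hypothetical placeholders, and the 0.2/0.8 weights mirror the position:movement ratio reported in the abstract.

```python
import numpy as np

def gaussian_log_density(x, mean, var):
    """Log density of a diagonal-covariance Gaussian (per-stream output density)."""
    x, mean, var = map(np.asarray, (x, mean, var))
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def multi_stream_log_emission(obs_streams, state_params, stream_weights):
    """Combine per-stream log-likelihoods for one HMM state.

    In the log domain, the multi-stream emission
    b_j(o_t) = prod_s b_js(o_st)^gamma_s becomes a weighted sum:
    log b_j(o_t) = sum_s gamma_s * log b_js(o_st).
    """
    total = 0.0
    for obs, (mean, var), weight in zip(obs_streams, state_params, stream_weights):
        total += weight * gaussian_log_density(obs, mean, var)
    return total

if __name__ == "__main__":
    # Hypothetical observation vectors for one frame.
    position_obs = [0.1, -0.3]           # e.g. normalized hand coordinates
    movement_obs = [0.5, 0.2, -0.1]      # e.g. hand velocity features

    # Placeholder (mean, variance) pairs for one state's per-stream Gaussians.
    state_params = [([0.0, 0.0], [1.0, 1.0]),            # position stream
                    ([0.4, 0.1, 0.0], [0.5, 0.5, 0.5])]  # movement stream

    stream_weights = [0.2, 0.8]          # position : movement, as in the abstract

    print(multi_stream_log_emission([position_obs, movement_obs],
                                    state_params, stream_weights))
```

In a full recognizer these weighted emission log-probabilities would feed into standard Viterbi decoding over each word HMM; the sketch only shows how the stream weights enter the state likelihood.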