Modelling and segmenting subunits for sign language recognition based on hand motion analysis

  • Authors:
  • Junwei Han;George Awad;Alistair Sutherland

  • Affiliations:
  • School of Computing, Dublin City University, Dublin 9, Ireland (all authors)

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2009

Quantified Score

Hi-index 0.10

Abstract

Modelling and segmenting subunits is an important topic in sign language study. Many scholars have proposed functional definitions of subunits from a linguistic perspective, but implementing these definitions efficiently with computer vision techniques remains a challenge. Conversely, a number of subunit segmentation methods have been investigated for vision-based sign language recognition, yet their subunits either lack linguistic support or are ill-suited to the task. In this paper, we attempt to define and segment subunits using computer vision techniques in a way that can also be largely explained by sign language linguistics. A subunit is first defined as one continuous visual hand action in time and space, comprising a series of interrelated consecutive frames. A simple but efficient method is then developed to detect subunit boundaries using hand motion discontinuity. Finally, temporal clustering by dynamic time warping is adopted to merge similar segments and refine the results. The presented work requires no prior knowledge of the types of signs or the number of subunits and is robust to variations in signer behaviour. Furthermore, it correlates highly with the definition of syllables in sign language while sharing characteristics of syllables in spoken languages. A set of comprehensive experiments on real-world signing videos demonstrates the effectiveness of the proposed model.
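The pipeline the abstract describes — boundary detection at hand-motion discontinuities, followed by dynamic-time-warping clustering to merge similar segments — can be sketched roughly as below. This is an illustrative reconstruction, not the authors' implementation: the function names, the speed-threshold boundary criterion, and the greedy merging rule are all assumptions made for the sketch.

```python
import numpy as np

def segment_boundaries(positions, speed_thresh=0.5):
    # Hand motion discontinuity heuristic (assumed form): mark a subunit
    # boundary wherever hand speed reaches a local minimum below a threshold.
    pos = np.asarray(positions, dtype=float)
    speed = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    boundaries = [0]
    for t in range(1, len(speed) - 1):
        if speed[t] < speed[t - 1] and speed[t] <= speed[t + 1] \
                and speed[t] < speed_thresh:
            boundaries.append(t + 1)
    boundaries.append(len(pos) - 1)
    return boundaries

def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) dynamic time warping between two
    # trajectories of 2D hand positions.
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def merge_similar(segments, merge_thresh=1.0):
    # Greedy temporal clustering (an assumed stand-in for the paper's
    # refinement step): assign each segment to the first cluster whose
    # exemplar lies within merge_thresh DTW distance, else open a new one.
    exemplars, labels = [], []
    for seg in segments:
        for k, ex in enumerate(exemplars):
            if dtw_distance(seg, ex) < merge_thresh:
                labels.append(k)
                break
        else:
            exemplars.append(seg)
            labels.append(len(exemplars) - 1)
    return labels
```

Note that neither step needs the sign vocabulary or the number of subunits in advance, which matches the abstract's claim; only the two thresholds are free parameters of this sketch.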