Segmentation of the face and hands in sign language video sequences using color and motion cues

  • Authors:
  • N. Habili;Cheng Chew Lim;A. Moini

  • Affiliations:
  • iOmniscient Pty Ltd, Chatswood, NSW, Australia;-;-

  • Venue:
  • IEEE Transactions on Circuits and Systems for Video Technology
  • Year:
  • 2004

Abstract

We present a hand and face segmentation methodology that uses color and motion cues for the content-based representation of sign language video sequences. The methodology consists of three stages: skin-color segmentation, change detection, and face and hand segmentation mask generation. In skin-color segmentation, a universal skin-color model is derived and image pixels are classified as skin or non-skin based on their Mahalanobis distance from the model; a segmentation threshold for the classifier is also derived. The aim of change detection is to localize moving objects in a video sequence. The change detection technique is based on the F-test and block-based motion estimation. Finally, the results of skin-color segmentation and change detection are analyzed to segment the face and hands. The performance of the algorithm is illustrated by simulations carried out on standard test sequences.
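
To make the skin-color classification step concrete, the sketch below shows one way a per-pixel Mahalanobis-distance test against a skin-color model could be implemented. The function name, the choice of the CbCr chrominance plane, and the placeholder mean, covariance, and threshold values are illustrative assumptions, not the authors' exact universal model or the threshold derived in the paper.

```python
import numpy as np

def mahalanobis_skin_mask(image_cbcr, skin_mean, skin_cov, threshold):
    """Label pixels as skin or non-skin by Mahalanobis distance to a skin-color model.

    image_cbcr : (H, W, 2) array of per-pixel chrominance values (e.g., Cb, Cr)
    skin_mean  : (2,) mean chrominance of the skin-color model
    skin_cov   : (2, 2) covariance of the skin-color model
    threshold  : squared-distance threshold; pixels at or below it are labelled skin
    """
    cov_inv = np.linalg.inv(skin_cov)
    # Centre each pixel on the model mean.
    diff = image_cbcr.reshape(-1, 2) - skin_mean
    # Squared Mahalanobis distance for every pixel: d^2 = (x - m)^T C^{-1} (x - m).
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return (d2 <= threshold).reshape(image_cbcr.shape[:2])

# Example with made-up model parameters (illustrative only, not from the paper):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame_cbcr = rng.uniform(0, 255, size=(120, 160, 2))
    mean = np.array([120.0, 150.0])
    cov = np.array([[90.0, 20.0], [20.0, 60.0]])
    mask = mahalanobis_skin_mask(frame_cbcr, mean, cov, threshold=9.0)
    print("skin pixels:", int(mask.sum()))
```

In such a scheme, the binary skin mask would then be combined with the change-detection result to retain only those skin-colored regions that are also moving, yielding the face and hand segmentation masks.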