Tracking benchmark databases for video-based sign language recognition

  • Authors:
  • Philippe Dreuw, Jens Forster, Hermann Ney

  • Affiliations:
  • Human Language Technology and Pattern Recognition Group, RWTH Aachen University, Aachen, Germany (all authors)

  • Venue:
  • ECCV'10: Proceedings of the 11th European Conference on Trends and Topics in Computer Vision - Volume Part I
  • Year:
  • 2010

Abstract

This work presents a survey of video databases that can be used in a continuous sign language recognition scenario to measure the performance of head and hand tracking algorithms, either with respect to a tracking error rate or with respect to a word error rate criterion. Robust tracking algorithms are required because the signing hand frequently moves in front of the face, may temporarily disappear, or may cross the other hand. Only a few studies consider the recognition of continuous sign language, and these usually rely on special devices such as colored gloves or blue-boxing environments to accurately track the regions of interest in sign language processing. Ground-truth labels for hand and head positions have been annotated for more than 30k frames in several publicly available video databases of varying degrees of difficulty, and preliminary tracking results are presented.
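
The two evaluation criteria named in the abstract are standard and can be stated precisely. Below is a minimal Python sketch of how they are commonly computed: word error rate as a Levenshtein distance over word sequences, and tracking error rate as the fraction of frames in which the tracked position deviates from the annotated ground truth by more than a distance threshold. The function names and the fixed pixel threshold are illustrative assumptions; the paper's exact criterion (e.g., a threshold normalized by face size) is not reproduced here.

```python
import math

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed as a Levenshtein distance over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

def tracking_error_rate(tracked, ground_truth, threshold=20.0):
    """Fraction of frames in which the tracked (x, y) position lies more than
    `threshold` pixels from the annotated ground-truth position.
    The threshold value here is an illustrative assumption."""
    errors = sum(
        1 for (tx, ty), (gx, gy) in zip(tracked, ground_truth)
        if math.hypot(tx - gx, ty - gy) > threshold
    )
    return errors / len(ground_truth)
```

For example, word_error_rate("my name is john", "my name john") yields 0.25: one deletion against four reference words.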