Identifying Sign Language Videos in Video Sharing Sites

  • Authors: Frank M. Shipman, Ricardo Gutierrez-Osuna, Caio D. D. Monteiro

  • Affiliations: Texas A&M University (all authors)

  • Venue: ACM Transactions on Accessible Computing (TACCESS)
  • Year: 2014

Abstract

Video sharing sites enable members of the sign language community to record and share their knowledge, opinions, and concerns on a wide range of topics. As a result, these sites contain emergent digital libraries of sign language content hidden within their much larger overall collections. This article explores the problem of locating these sign language (SL) videos and presents techniques for identifying SL videos in such collections. To assess the effectiveness of existing text-based search for locating SL videos, a series of queries was issued to YouTube for SL videos on the top 10 news stories of 2011 according to Yahoo!. Overall precision for the first page of results (up to 20 results) was 42%. An approach for automatically detecting SL video is then presented. Five video features considered likely to be of value were developed using standard background modeling and face detection. The article compares the results of an SVM classifier given all combinations of these five features. The results show that a measure of the symmetry of motion relative to the face position provided the best performance of any single feature. When tested against a challenging test collection that included many likely false positives, an SVM provided with all five features achieved 82% precision and 90% recall. In contrast, the text-based searches (queries combining the topic terms with “ASL” or “sign language”) returned a significant portion of non-SL content: nearly half of all videos found. By our estimates, applying video-based filtering such as the technique proposed here would raise precision from 42% for text-based queries alone to roughly 75%.
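
The abstract does not give the paper's exact feature definitions or classifier settings, so the sketch below is only a minimal illustration of the general pipeline it describes: foreground motion is isolated with standard background modeling, a face is detected, a motion-symmetry feature is computed about the vertical axis through the face, and an SVM is trained on per-video feature vectors. The choice of OpenCV's MOG2 background subtractor and Haar-cascade face detector, the `motion_symmetry_feature` helper, and the placeholder `video_paths`/`labels` data are all assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def motion_symmetry_feature(video_path, max_frames=300):
    """Hypothetical feature sketch: compare foreground-motion energy on the
    left vs. right of a vertical axis through the detected face."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    left, right, face_x = 0.0, 0.0, None
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        if face_x is None:  # locate the signer's face once
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.1, 5)
            if len(faces) > 0:
                x, _, w, _ = faces[0]
                face_x = x + w // 2  # vertical axis through the face
        mask = bg.apply(frame)  # foreground (motion) mask from background model
        if face_x is not None and face_x > 0:
            left += float(mask[:, :face_x].sum())
            right += float(mask[:, face_x:].sum())
    cap.release()
    total = left + right
    # 1.0 = motion perfectly balanced about the face axis; 0.0 = one-sided
    return 1.0 - abs(left - right) / total if total > 0 else 0.0

# Placeholder training data: labeled video files (not from the paper).
video_paths = ["sl_clip.mp4", "vlog_clip.mp4"]
labels = [1, 0]  # 1 = sign language video, 0 = other content

# The paper uses five features; this sketch trains on the single one above.
X = np.array([[motion_symmetry_feature(p)] for p in video_paths])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X))
```

With two training examples this is obviously a toy, but it mirrors the design point reported in the abstract: a single motion-symmetry feature already carries much of the signal, and the remaining features would simply be appended to the same per-video feature vector before SVM training.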