Aligning ASL for Statistical Translation Using a Discriminative Word Model

  • Authors:
  • Ali Farhadi; David Forsyth

  • Affiliations:
  • University of Illinois at Urbana-Champaign; University of Illinois at Urbana-Champaign

  • Venue:
  • CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2
  • Year:
  • 2006

Abstract

We describe a method to align ASL video subtitles with a closed-caption transcript. Our alignments are partial, based on spotting words within the video sequence, which consists of joined (rather than isolated) signs with unknown word boundaries. We start with windows known to contain an example of the word, but not limited to it. We estimate the start and end of the word in these examples using a voting method. This provides a small number of training examples (typically three per word). Since there is no shared structure, we use a discriminative rather than a generative word model. While our word spotters are not perfect, they are sufficient to establish an alignment. We demonstrate that quite small numbers of good word spotters result in an alignment good enough to produce simple English-ASL translations, both by phrase matching and using word substitution.
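The abstract gives no implementation details, but the spotting step can be pictured with a minimal sketch: a per-word discriminative classifier, trained on a handful of positive windows against negatives drawn from other video, then slid over a continuous sequence of frame features. The sketch below uses plain logistic regression as a stand-in for the paper's actual discriminative model; the feature dimension, window length, and threshold are illustrative assumptions, not values from the paper.

    import numpy as np

    def sigmoid(z):
        # Clip to avoid overflow in exp for extreme scores.
        return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

    def train_spotter(pos, neg, lr=0.1, epochs=500):
        """Fit a logistic-regression word spotter on fixed-length
        window features (one flattened feature vector per window)."""
        X = np.vstack([pos, neg])
        y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            p = sigmoid(X @ w + b)
            w -= lr * (X.T @ (p - y)) / len(y)
            b -= lr * np.mean(p - y)
        return w, b

    def spot(seq, w, b, win, threshold=0.5):
        """Slide a fixed-length window over a per-frame feature
        sequence; return (start_frame, score) above threshold."""
        hits = []
        for t in range(len(seq) - win + 1):
            score = sigmoid(seq[t:t + win].ravel() @ w + b)
            if score > threshold:
                hits.append((t, float(score)))
        return hits

    # Toy usage with synthetic frame features (hypothetical sizes).
    rng = np.random.default_rng(0)
    win, dim = 12, 8                              # frames per window x feature dim
    pos = rng.normal(1.0, 0.5, (3, win * dim))    # ~3 positives per word, as in the paper
    neg = rng.normal(0.0, 0.5, (30, win * dim))   # negatives from other video
    w, b = train_spotter(pos, neg)
    seq = rng.normal(0.0, 0.5, (100, dim))
    seq[40:40 + win] += 1.0                       # plant one occurrence of the word
    print(spot(seq, w, b, win)[:3])

A discriminative spotter of this kind needs no shared structure across words, which matches the paper's motivation for preferring it over a generative model when only about three training examples per word are available.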