Facial movement analysis in ASL

  • Authors:
  • Christian Vogler; Siome Goldenstein

  • Affiliations:
  • Gallaudet University, Gallaudet Research Institute, 800 Florida Ave. NE, 20002-3695, Washington, DC, USA; Universidade Estadual de Campinas, Instituto de Computação, Caixa Postal 6176, 13084-971, Campinas, SP, Brazil

  • Venue:
  • Universal Access in the Information Society
  • Year:
  • 2008


Abstract

In the age of speech and voice recognition technologies, sign language recognition is an essential part of ensuring equal access for deaf people. To date, sign language recognition research has mostly ignored facial expressions that arise as part of natural sign language discourse, even though they carry important grammatical and prosodic information. One reason is that tracking the motion and dynamics of expressions in human faces from video is a hard task, especially given the high number of occlusions from the signers’ hands. This paper presents a 3D deformable model tracking system to address this problem, and applies it to sequences of native signers taken from the National Center for Sign Language and Gesture Resources (NCSLGR), with a special emphasis on outlier rejection methods to handle occlusions. The experiments conducted in this paper validate the output of the face tracker against expert human annotations of the NCSLGR corpus, demonstrate the promise of the proposed face tracking framework for sign language data, and reveal that the tracking framework picks up properties that ideally complement human annotations for linguistic research.