Comparing native signers' perception of American Sign Language animations and videos via eye tracking

  • Authors: Hernisa Kacorri, Allen Harper, Matt Huenerfauth
  • Affiliation: The City University of New York, New York, NY (all authors)

  • Venue: Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility
  • Year: 2013

Abstract

Animations of American Sign Language (ASL) have accessibility benefits for signers with lower written-language literacy. Our lab has conducted prior evaluations of synthesized ASL animations, asking native signers to watch different versions of animations and answer comprehension and subjective questions about them. Seeking an alternative method of measuring users' reactions to animations, we are now investigating the use of eye tracking to understand how users perceive our stimuli. This study quantifies how the eye gaze of native signers varies when they view videos of a human ASL signer versus synthesized ASL animations of differing quality. We found that, when viewing videos, signers spend more time looking at the face and less frequently move their gaze between the face and body of the signer. We also found correlations between these two eye-tracking metrics and participants' responses to subjective evaluations of animation quality. This paper provides methodological guidance for designing user studies of sign language animations that include eye tracking, and it suggests how certain eye-tracking metrics could serve as an alternative or complementary form of measurement in evaluation studies of sign language animation.
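To make the two metrics concrete, below is a minimal Python sketch of how one might compute them from fixation data and correlate one of them with subjective ratings. The fixation data format (duration/AOI pairs), the AOI labels "face" and "body", the per-second normalization, and all sample numbers are illustrative assumptions, not the authors' actual pipeline or data.

```python
from scipy import stats

def gaze_metrics(fixations):
    """Compute the two metrics discussed in the abstract from a list of
    fixations, each a (duration_seconds, aoi) pair where aoi is 'face'
    or 'body'. Data format and labels are assumptions for illustration."""
    total = sum(d for d, _ in fixations)
    if total == 0:
        return 0.0, 0.0
    # Proportion of viewing time spent fixating the signer's face.
    face_ratio = sum(d for d, a in fixations if a == "face") / total
    # Gaze transitions between the face and body AOIs, counted over
    # consecutive fixation pairs and normalized per second of viewing.
    transitions = sum(1 for (_, a), (_, b) in zip(fixations, fixations[1:])
                      if a != b)
    return face_ratio, transitions / total

# Hypothetical per-stimulus values: face-fixation ratios and mean
# subjective quality ratings (invented numbers, for illustration only).
face_ratios = [0.81, 0.64, 0.72, 0.55]
quality_scores = [8.2, 5.1, 6.7, 4.3]
# Spearman's rank correlation is a common choice for ordinal subjective
# ratings; the statistic used in the paper itself may differ.
rho, p = stats.spearmanr(face_ratios, quality_scores)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

Under these assumptions, a positive correlation between face-fixation ratio and quality ratings would mirror the pattern reported in the abstract, where videos of a human signer drew more face-directed gaze than lower-quality animations.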