Enabling access through real-time sign language communication over cell phones

  • Authors:
  • Jaehong Chon, Neva Cherniavsky, Eve A. Riskin, Richard E. Ladner

  • Affiliations:
  • Department of Electrical Engineering, University of Washington, Seattle, WA; Department of Computer Science and Engineering, University of Washington, Seattle, WA

  • Venue:
  • Proceedings of the 43rd Asilomar Conference on Signals, Systems and Computers (Asilomar '09)
  • Year:
  • 2009


Abstract

The primary challenge to enabling real-time two-way video conferencing on a cell phone is overcoming its limited bandwidth, computation, and power. The goal of the MobileASL project is to give people who use American Sign Language (ASL) access to real-time mobile video communication on an off-the-shelf mobile phone. Processor, bandwidth, and power efficiency are improved through SIMD optimization; region-of-interest encoding based on skin detection; video resolution selection (used to determine the best trade-off between frame rate and spatial resolution); and variable frame rates based on activity recognition. Our prototype system is able to compress, transmit, and decode 12-15 frames per second in real time and produce intelligible ASL at 30 kbps. Furthermore, we can achieve up to 23 extra minutes of talk time, or an 8% gain in the phone's battery life, through our frame dropping technique.
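
The variable-frame-rate idea lends itself to a brief illustration. The C sketch below is not the paper's implementation: it stands in for the activity recognizer with a mean absolute luma difference between consecutive frames, and the constants (SIGNING_SAD_THRESHOLD, FPS_SIGNING, FPS_IDLE) are hypothetical placeholders, not values from the paper.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical constants -- not taken from the paper. */
    #define SIGNING_SAD_THRESHOLD 4   /* mean abs. luma diff per pixel */
    #define FPS_SIGNING 12            /* full rate while the user signs */
    #define FPS_IDLE 1                /* trickle rate while just watching */

    /* Mean absolute luma difference between consecutive frames; a cheap
     * stand-in for the activity recognizer mentioned in the abstract. */
    static unsigned mean_abs_diff(const uint8_t *cur, const uint8_t *prev,
                                  size_t n)
    {
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += (uint64_t)(cur[i] > prev[i] ? cur[i] - prev[i]
                                                : prev[i] - cur[i]);
        return (unsigned)(sum / n);
    }

    /* Pick a target frame rate: encode at full rate during signing,
     * drop frames otherwise to save CPU cycles and battery. */
    static int target_fps(const uint8_t *cur, const uint8_t *prev, size_t n)
    {
        return mean_abs_diff(cur, prev, n) >= SIGNING_SAD_THRESHOLD
                   ? FPS_SIGNING : FPS_IDLE;
    }

    int main(void)
    {
        enum { W = 176, H = 144 };        /* QCIF, typical phone video */
        static uint8_t prev[W * H], cur[W * H];

        memset(prev, 128, sizeof prev);   /* still scene: identical frames */
        memset(cur, 128, sizeof cur);
        printf("still scene -> %d fps\n", target_fps(cur, prev, W * H));

        for (size_t i = 0; i < sizeof cur; i++)  /* moving hands: big diffs */
            cur[i] = (uint8_t)(prev[i] + (i % 32));
        printf("signing     -> %d fps\n", target_fps(cur, prev, W * H));
        return 0;
    }

Dropping to a trickle rate while the user is only watching is what could yield battery savings of the kind quoted above, since the encoder and radio do far less work per second of conversation.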