Follow that sound: using sonification and corrective verbal feedback to teach touchscreen gestures

  • Authors:
  • Uran Oh; Shaun K. Kane; Leah Findlater

  • Affiliations:
  • University of Maryland, College Park, MD; University of Maryland, Baltimore County (UMBC); University of Maryland, College Park, MD

  • Venue:
  • Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '13)

  • Year:
  • 2013

Abstract

While sighted users can learn touchscreen gestures through observation (e.g., of other users or video tutorials), such mechanisms are inaccessible to users with visual impairments, making gestures challenging to learn. We propose and evaluate two techniques for teaching touchscreen gestures to users with visual impairments: (1) corrective verbal feedback, which uses text-to-speech and automatic analysis of the user's drawn gesture; and (2) gesture sonification, which generates sound from finger touches to create an audio representation of a gesture. To refine and evaluate the techniques, we conducted two controlled lab studies. The first study, with 12 sighted participants, compared parameters for sonifying gestures in an eyes-free scenario and identified pitch + stereo panning as the best combination. In the second study, 6 blind and low-vision participants completed gesture replication tasks with the two feedback techniques. Subjective data and preliminary performance findings indicate that the techniques offer complementary advantages.
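
To make the sonification idea concrete, below is a minimal sketch, not the authors' implementation, of a pitch + stereo panning mapping like the one the first study identified as best. It assumes the finger's y-position drives pitch and its x-position drives left/right pan; the function name sonify_gesture, the frequency range, and all other parameter values are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumed mapping, not the paper's code): sonify a touch
# trajectory by mapping y-position to pitch and x-position to stereo pan.
import numpy as np

SAMPLE_RATE = 44100          # audio samples per second
F_MIN, F_MAX = 220.0, 880.0  # assumed pitch range (A3..A5), in Hz


def sonify_gesture(points, duration=1.0):
    """Render a gesture, given as normalized (x, y) points in [0, 1],
    as a stereo waveform of shape (n_samples, 2)."""
    n = int(SAMPLE_RATE * duration)
    pts = np.asarray(points, dtype=float)

    # Resample the touch trajectory to one (x, y) pair per audio sample.
    idx = np.linspace(0, len(pts) - 1, n)
    x = np.interp(idx, np.arange(len(pts)), pts[:, 0])
    y = np.interp(idx, np.arange(len(pts)), pts[:, 1])

    # Pitch: map y to frequency, then integrate frequency into a
    # continuous phase so the pitch glides without clicks.
    freq = F_MIN + y * (F_MAX - F_MIN)
    phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE
    mono = np.sin(phase)

    # Pan: equal-power panning; x = 0 is full left, x = 1 is full right.
    theta = x * (np.pi / 2)
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)], axis=1)


# Example: a bottom-left to top-right diagonal swipe rises in pitch
# while moving from the left channel to the right channel.
wave = sonify_gesture([(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)])
```

Equal-power panning (cosine/sine channel gains) keeps perceived loudness roughly constant as the finger moves horizontally, which is one plausible way to make position audible in an eyes-free setting without introducing volume artifacts.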