The SoundsRight CAPTCHA: an improved approach to audio human interaction proofs for blind users

  • Authors and affiliations:
  • Jonathan Lazar, Towson University, Towson, Maryland, United States
  • Jinjuan Feng, Towson University, Towson, Maryland, United States
  • Tim Brooks, Towson University, Towson, Maryland, United States
  • Genna Melamed, Towson University, Towson, Maryland, United States
  • Brian Wentz, Frostburg State University, Frostburg, Maryland, United States
  • Jon Holman, Towson University, Towson, Maryland, United States
  • Abiodun Olalere, Towson University, Towson, Maryland, United States
  • Nnanna Ekedebe, Towson University, Towson, Maryland, United States

  • Venue:
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Year:
  • 2011

Abstract

In this paper we describe the development of a new audio CAPTCHA, the SoundsRight CAPTCHA, and its evaluation with 20 blind users. Blind users cannot use visual CAPTCHAs, and the research literature documents that existing audio CAPTCHAs have task success rates below 50% for blind users. The SoundsRight CAPTCHA presents a real-time audio challenge in which the user is asked to identify a specific sound (for example, the sound of a bell or a piano) each time it occurs in a series of 10 sounds played through the computer's audio system. Results from three rounds of usability testing document task success rates higher than 90% for blind users. Discussion, limitations, and suggestions for future research are also presented.
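
To make the challenge design concrete, below is a minimal sketch of the challenge-and-grading logic as the abstract describes it. The sound names, sequence construction, and all function names are illustrative assumptions rather than details from the paper; the actual system plays pre-recorded audio clips in real time, which this sketch does not reproduce.

```python
# Hypothetical sketch of the SoundsRight challenge logic, inferred from the
# abstract. Sound classes, the response model, and the pass criterion are
# assumptions for illustration only.
import random

SOUND_LIBRARY = ["bell", "piano", "dog", "water", "baby"]  # assumed classes
SEQUENCE_LENGTH = 10  # the abstract specifies a series of 10 sounds


def build_challenge(rng: random.Random):
    """Pick a target sound and a 10-sound sequence that contains it."""
    target = rng.choice(SOUND_LIBRARY)
    sequence = [rng.choice(SOUND_LIBRARY) for _ in range(SEQUENCE_LENGTH)]
    if target not in sequence:  # guarantee at least one occurrence
        sequence[rng.randrange(SEQUENCE_LENGTH)] = target
    return target, sequence


def grade(sequence, target, responses):
    """Pass only if the user flagged exactly the positions where the target
    sound played (responses is a set of 0-based sequence indices)."""
    expected = {i for i, sound in enumerate(sequence) if sound == target}
    return responses == expected


if __name__ == "__main__":
    rng = random.Random(42)
    target, sequence = build_challenge(rng)
    print(f"Press the key each time you hear: {target}")
    # Simulated perfect user: flags every occurrence of the target sound.
    responses = {i for i, sound in enumerate(sequence) if sound == target}
    print("Challenge passed:", grade(sequence, target, responses))
```

Under this reading, a pass requires flagging every occurrence of the target sound with no false alarms, which matches the abstract's phrasing that the user must identify the sound "each time it occurs" in the series.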