SeeSay and HearSay CAPTCHA for mobile interaction

  • Authors:
  • Sajad Shirali-Shahreza; Gerald Penn; Ravin Balakrishnan; Yashar Ganjali

  • Affiliations:
  • University of Toronto, Toronto, Ontario, Canada (all authors)

  • Venue:
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Year:
  • 2013


Abstract

Speech has clear advantages as an input modality for smartphone applications, especially in scenarios where touch or keyboard entry is difficult, on increasingly miniaturized devices where usable keyboards are hard to accommodate, or when only small amounts of text need to be entered, such as composing SMS messages or responding to a CAPTCHA challenge. In this paper, we propose two new alternative ways to design CAPTCHAs in which the user says the answer instead of typing it, with output stimuli provided either (a) visually (SeeSay) or (b) auditorily (HearSay). Our user study results show that the SeeSay CAPTCHA requires less time to solve, and that users prefer it over current text-based CAPTCHA methods.