How Good Are Humans at Solving CAPTCHAs? A Large Scale Evaluation

  • Authors:
  • Elie Bursztein, Steven Bethard, Celine Fabry, John C. Mitchell, Dan Jurafsky


  • Venue:
  • SP '10: Proceedings of the 2010 IEEE Symposium on Security and Privacy
  • Year:
  • 2010

Abstract

Captchas are designed to be easy for humans but hard for machines. However, most recent research has focused only on making them hard for machines. In this paper, we present what is, to the best of our knowledge, the first large-scale evaluation of captchas from the human perspective, with the goal of assessing how much friction captchas present to the average user. For this study, we asked workers from Amazon's Mechanical Turk and an underground captcha-breaking service to solve more than 318,000 captchas drawn from the 21 most popular captcha schemes (13 image schemes and 8 audio schemes). Analysis of the resulting data reveals that captchas are often difficult for humans, with audio captchas being particularly problematic. We also find demographic trends indicating, for example, that non-native speakers of English are slower in general and less accurate on English-centric captcha schemes. Evidence from a week's worth of eBay captchas (14,000,000 samples) suggests that the solving accuracies found in our study are close to real-world values and that improving audio captchas should become a priority, as nearly 1% of all captchas are delivered as audio rather than as images. Finally, our study reveals that it is more effective for an attacker to use Mechanical Turk to solve captchas than an underground service.