Do you see what I see: crowdsource annotation of captured scenes

  • Authors:
  • J. Aaron Hipp; Deepti Adlakha; Rebecca Gernes; Agata Kargol; Robert Pless

  • Affiliations:
  • Washington University in St. Louis, St. Louis, MO (all authors)

  • Venue:
  • Proceedings of the 4th International SenseCam & Pervasive Imaging Conference
  • Year:
  • 2013


Abstract

The Archive of Many Outdoor Scenes has captured 400 million images. Many of these cameras view street intersections, a subset of which has undergone built environment improvements during the past seven years. We identified six cameras in Washington, DC, and uploaded 120 images from each before a built environment change (2007) and after (2010) to the crowdsourcing website Amazon Mechanical Turk (n=1,440). Five unique MTurk workers annotated each image, counting the number of pedestrians, cyclists, and vehicles. Two trained Research Assistants completed the same tasks. Reliability and validity statistics of MTurk workers revealed substantial agreement in annotating captured images of pedestrians and vehicles. Using the mean annotation of four MTurk workers proved most parsimonious for valid results. Crowdsourcing was shown to be a reliable and valid workforce for annotating images of outdoor human behavior.
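
To make the aggregation step concrete, the sketch below shows one way the "mean of four MTurk workers" per image could be computed and checked against trained Research Assistant (RA) counts. The per-image counts, the rule for choosing four of the five workers, and the use of Pearson correlation as the validity statistic are illustrative assumptions, not the paper's actual data or analysis.

```python
# Illustrative sketch only: hypothetical counts, not the study's data.
# Shows aggregating five MTurk pedestrian counts per image into a
# mean-of-four value and comparing those values with RA counts.
from statistics import mean, correlation  # correlation requires Python 3.10+

# Hypothetical per-image pedestrian counts: five MTurk workers and one RA count.
images = [
    {"mturk": [3, 4, 3, 5, 3], "ra": 4},
    {"mturk": [0, 1, 0, 0, 0], "ra": 0},
    {"mturk": [7, 6, 8, 7, 9], "ra": 7},
    {"mturk": [2, 2, 3, 2, 1], "ra": 2},
]

def mean_of_four(counts):
    """Average the four counts closest to the overall mean of the five
    (an assumption; the paper does not specify how the four are chosen)."""
    m = mean(counts)
    closest_four = sorted(counts, key=lambda c: abs(c - m))[:4]
    return mean(closest_four)

crowd = [mean_of_four(img["mturk"]) for img in images]
ra = [img["ra"] for img in images]

# Simple validity check: Pearson correlation between crowd means and RA counts.
print("crowd means:", crowd)
print("RA counts:  ", ra)
print("Pearson r:   %.3f" % correlation(crowd, ra))
```

In a full analysis one would compute such agreement statistics separately for pedestrians, cyclists, and vehicles across all 1,440 images, and likewise for the before (2007) and after (2010) image sets.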