Pushing the limits of Mechanical Turk: qualifying the crowd for video geo-location

  • Authors:
  • Luke Gottlieb, Jaeyoung Choi, Pascal Kelm, Thomas Sikora, Gerald Friedland

  • Affiliations:
  • International Computer Science Institute, Berkeley, CA, USA (Gottlieb, Choi, Friedland); Technische Universität Berlin, Berlin, Germany (Kelm, Sikora)

  • Venue:
  • Proceedings of the ACM Multimedia 2012 Workshop on Crowdsourcing for Multimedia
  • Year:
  • 2012

Abstract

In this article we review the methods we developed for finding Mechanical Turk participants to manually annotate the geo-location of random videos from the web. We require high-quality annotations for this project, as we are attempting to establish a human baseline for future comparison with machine systems. Our task differs from a standard Mechanical Turk task in that it is difficult for both humans and machines, whereas a standard task is usually easy for humans and difficult or impossible for machines. This article discusses the varied difficulties we encountered while qualifying annotators and the steps we took to select the individuals most likely to perform well on our annotation task in the future.
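
The screening step the abstract describes, testing workers before admitting them to the real annotation task, maps onto what Mechanical Turk calls a qualification type. The sketch below is an illustration rather than the authors' actual setup (the paper predates today's MTurk REST API); it uses Python with boto3, and the question text, durations, reward, and sandbox endpoint are all assumptions made for the example.

```python
# Hypothetical sketch: gate workers behind a custom qualification test,
# in the spirit of the screening step described in the abstract.
# Question content, thresholds, and amounts are illustrative only.
import boto3

# MTurk sandbox endpoint, so experimenting does not spend real money.
client = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A one-question qualification test in MTurk's QuestionForm XML.
QUESTION_FORM = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>geo1</QuestionIdentifier>
    <QuestionContent><Text>Watch the linked video and name the city where it was most likely filmed.</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

# No AnswerKey is supplied, so each worker's test submission must be
# reviewed and granted by hand (via list/accept_qualification_request).
qual = client.create_qualification_type(
    Name="Video geo-location screening (demo)",
    Description="Screens annotators on a sample geo-location task.",
    QualificationTypeStatus="Active",
    Test=QUESTION_FORM,
    TestDurationInSeconds=1800,
    RetryDelayInSeconds=86400,
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# Attach the qualification to a HIT so only workers who hold it can accept.
hit = client.create_hit(
    Title="Geo-locate a web video (demo)",
    Description="Estimate where a short video was recorded.",
    Reward="0.50",
    AssignmentDurationInSeconds=3600,
    LifetimeInSeconds=86400,
    MaxAssignments=3,
    Question=QUESTION_FORM,  # placeholder; a real HIT would use its own form
    QualificationRequirements=[{
        "QualificationTypeId": qual_id,
        "Comparator": "Exists",
        "ActionsGuarded": "Accept",
    }],
)
print("HIT:", hit["HIT"]["HITId"])
```

Leaving out an answer key means the requester scores each candidate's test by hand before granting the qualification, which fits a setting like this one where automatic grading cannot judge answer quality.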