In this article we review the methods we developed for finding Mechanical Turk participants to manually annotate the geo-location of random videos from the web. We require high-quality annotations for this project because we are attempting to establish a human baseline for future comparison against machine systems. This task differs from a standard Mechanical Turk task in that it is difficult for both humans and machines, whereas typical tasks on the platform are easy for humans and difficult or impossible for machines. This article discusses the difficulties we encountered while qualifying annotators and the steps we took to select the individuals most likely to perform well on our annotation task in the future.
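The article itself does not include code, but the kind of qualification step described above can be sketched briefly. The hypothetical function below (its names, data layout, and thresholds are illustrative assumptions, not taken from the paper) screens workers by the median great-circle error of their location guesses on a small set of gold-standard videos with known coordinates.

```python
import math
from statistics import median

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def qualify_workers(responses, gold, max_median_error_km=200.0, min_items=5):
    """Keep workers whose median error on gold-standard videos is acceptable.

    responses: {worker_id: {video_id: (lat, lon)}}  -- each worker's guesses
    gold:      {video_id: (lat, lon)}               -- known true locations
    """
    qualified = []
    for worker, guesses in responses.items():
        errors = [haversine_km(*guess, *gold[vid])
                  for vid, guess in guesses.items() if vid in gold]
        # Require enough overlap with the gold set before judging the worker.
        if len(errors) >= min_items and median(errors) <= max_median_error_km:
            qualified.append(worker)
    return qualified
```

Screening on the median rather than the mean keeps a single wild guess from disqualifying an otherwise careful worker; the 200 km cutoff and the minimum of five gold items are placeholder values chosen only for the sketch.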