Crowdsourcing services such as Amazon's Mechanical Turk (MTurk) provide new venues for recruiting participants and conducting studies; hundreds of surveys may be offered to workers at any given time. We reflect on the results of six related studies we performed on MTurk over a two-year period. The studies used a combination of open-ended questions and structured hypothetical statements about story-like scenarios to engage the efforts of 1,252 participants. We describe the method used in the studies and reflect on what we have learned about identified best practices. We analyze the aggregated data to profile the types of Turkers who take surveys and to examine how the characteristics of the surveys may influence data reliability. The results point to the value of participant engagement, identify potential changes in MTurk as a study venue, and highlight how communication among Turkers influences the data that researchers collect.