In this paper we discuss a screening process used in conjunction with a survey administered via Amazon.com's Mechanical Turk. We sought an easily implementable method for disqualifying participants who did not take the study tasks seriously. Using two previously pilot-tested screening questions, we identified 764 of 1,962 respondents who did not answer conscientiously. Young men appear most likely to fail the qualification task. Professionals, students, and non-workers appear more likely to take the task seriously than financial workers, hourly workers, and other workers. Men over 30 and women were more likely to answer seriously.
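The screening approach described above, disqualifying respondents who fail pre-tested check questions, can be sketched as a simple filter over survey responses. This is a minimal illustration only: the check-question names and expected answers below are hypothetical placeholders, not the actual items used in the study.

```python
# Hypothetical attention-check filter illustrating the kind of screening
# described in the abstract. Question keys and expected answers are
# placeholders, not the study's actual screening items.

SCREENING_KEY = {
    "check_q1": "blue",   # hypothetical expected answer
    "check_q2": "seven",  # hypothetical expected answer
}

def passes_screening(response: dict) -> bool:
    """A respondent passes only if every check question is answered correctly."""
    return all(
        response.get(q, "").strip().lower() == expected
        for q, expected in SCREENING_KEY.items()
    )

# Example responses: respondent 1 answers both checks correctly,
# respondent 2 misses the first check and would be disqualified.
responses = [
    {"id": 1, "check_q1": "blue", "check_q2": "seven"},
    {"id": 2, "check_q1": "red",  "check_q2": "seven"},
]

serious = [r for r in responses if passes_screening(r)]
rejected = [r for r in responses if not passes_screening(r)]
```

In practice such checks are often implemented as Mechanical Turk qualification tasks, so that workers who fail never see the main survey at all.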