A central challenge in human computation is understanding how to design task environments that effectively attract participants and coordinate the problem-solving process. In this paper, we consider a common problem that requesters face on Amazon Mechanical Turk: how should a task be designed so as to induce good output from workers? In posting a task, a requester decides how to break the task down into unit tasks, how much to pay for each unit task, and how many workers to assign to each unit task. These design decisions affect the rate at which workers complete unit tasks, as well as the quality of the resulting work. Using image labeling as an example task, we consider the problem of designing the task to maximize the number of quality tags received within given time and budget constraints. We consider two different measures of work quality, and construct models for predicting the rate and quality of work based on observations of output under various designs. Preliminary results show that simple models can accurately predict the quality of output per unit task, but are less accurate in predicting the rate at which unit tasks complete. At a fixed rate of pay, our models generate different designs depending on the quality metric, and optimized designs obtain significantly more quality tags than baseline comparisons.
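
To make the design-optimization problem concrete, the following is a minimal sketch (not the authors' implementation) of choosing a task design — pay per unit task, workers assigned per unit task, and images grouped into each unit task — to maximize expected quality tags under a budget and deadline. The predictive models `predict_rate` and `predict_quality` are hypothetical placeholders standing in for the models fit from observed worker output; the candidate grids and parameter values are likewise illustrative assumptions.

```python
from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class Design:
    pay_cents: int         # payment per unit task
    workers_per_task: int  # assignments per unit task
    images_per_task: int   # how the overall task is broken into unit tasks


def predict_rate(design: Design) -> float:
    """Hypothetical model: unit tasks completed per hour for a given design."""
    return 2.0 * design.pay_cents / design.workers_per_task


def predict_quality(design: Design) -> float:
    """Hypothetical model: expected quality tags produced per unit task."""
    return 1.5 * design.images_per_task * (1 - 0.5 ** design.workers_per_task)


def expected_quality_tags(design: Design, budget_cents: int, hours: float) -> float:
    """Expected quality tags obtainable under the budget and time constraints."""
    cost_per_task = design.pay_cents * design.workers_per_task
    affordable = budget_cents // cost_per_task        # unit tasks the budget allows
    completable = int(predict_rate(design) * hours)   # unit tasks finished before the deadline
    return min(affordable, completable) * predict_quality(design)


def optimize(budget_cents: int, hours: float) -> Design:
    """Exhaustively search a small grid of candidate designs."""
    candidates = (Design(p, w, i)
                  for p, w, i in product([1, 2, 5, 10], [1, 2, 3, 5], [1, 5, 10, 20]))
    return max(candidates, key=lambda d: expected_quality_tags(d, budget_cents, hours))


if __name__ == "__main__":
    best = optimize(budget_cents=2000, hours=24.0)
    print(best, expected_quality_tags(best, 2000, 24.0))
```

In practice the placeholder models would be replaced by the rate and quality predictors estimated from observed completions, and the same grid search (or a more efficient optimizer) would be rerun per quality metric, which is why different metrics can yield different optimal designs at the same rate of pay.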