Micro-task platforms provide a marketplace for hiring people to do short-term work for small payments. Requesters often struggle to obtain high-quality results, especially on content-creation tasks, because work cannot be easily verified and workers can move to other tasks without consequence. Such platforms provide little opportunity for workers to reflect and improve their task performance. Timely and task-specific feedback can help crowd workers learn, persist, and produce better results. We analyze the design space for crowd feedback and introduce Shepherd, a prototype system for visualizing crowd work, providing feedback, and promoting workers into shepherding roles. This paper describes our current progress and our plans for system development and evaluation.