In this paper we describe Rabj, an engine designed to simplify the collection of human input. Over the course of a year, we have used Rabj to collect more than 2.3 million human judgments in support of data mining, data entry, and curation tasks at Freebase. We illustrate several successful applications that have used Rabj to collect human judgments. We describe how Rabj's architecture and design decisions are shaped by the constraints of content agnosticity, data freshness, latency, and visibility. We present work aimed at increasing the yield and reliability of human computation efforts. Finally, we discuss empirical observations and lessons learned during a year of operating the service.
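The constraint of content agnosticity is concrete enough to sketch: the engine treats each question as an opaque payload plus a set of admissible answers, so a single judgment pipeline can serve many task types. The Python sketch below illustrates that model against a hypothetical REST interface; the endpoint paths, payload fields, and the rabj.example.com host are illustrative assumptions, not Rabj's actual API.

# A minimal sketch of a content-agnostic judgment workflow, loosely modeled
# on the paper's description of Rabj. Endpoint paths and payload fields are
# hypothetical illustrations, not Rabj's actual API.
import requests

RABJ_URL = "https://rabj.example.com/api"  # hypothetical base URL
API_KEY = "..."                            # placeholder credential

# Content agnosticity: a question is an opaque content payload plus a fixed
# set of possible answers, so any task type can reuse one pipeline.
question = {
    "content": {"text": "Is 'Barack Obama' the same entity as '/m/02mjmr'?"},
    "answers": ["yes", "no", "skip"],
    "judgments_wanted": 3,  # redundant labels to improve reliability
}

# Submit the question to a named queue of work.
resp = requests.post(
    f"{RABJ_URL}/queues/entity-reconciliation/questions",
    json=question,
    params={"apikey": API_KEY},
    timeout=10,
)
resp.raise_for_status()
question_id = resp.json()["id"]  # assumed response shape

# Later, poll for collected judgments; under the freshness and latency
# constraints the abstract names, results trickle in rather than arriving
# in one batch.
judgments = requests.get(
    f"{RABJ_URL}/questions/{question_id}/judgments",
    params={"apikey": API_KEY},
    timeout=10,
).json()
for j in judgments:
    print(j["worker"], j["answer"])

Asking for several judgments per question, as in the sketch, is one simple way to trade yield for reliability: redundant labels can be aggregated (for example by majority vote) before feeding results back into data mining or curation.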