Mechanised labour and games with a purpose are the two most popular human computation genres, frequently employed to support research in fields as diverse as natural language processing, the Semantic Web, and databases. Research projects typically rely on one genre or the other, so there is little understanding of how the two compare, or of whether and how they could be combined to offset their respective weaknesses. This paper addresses these open questions. It first identifies the differences between the two genres, primarily in terms of cost, speed, and result quality, drawing on existing studies in the literature. It then reports on a comparative study in which the same task was performed through both genres and the results compared. The study's findings demonstrate that the two genres are highly complementary: not only are they suited to different types of projects, but they also open new opportunities for cross-genre human computation solutions that exploit the strengths of both simultaneously.