Crowdsourcing is a market of steadily growing importance upon which both academia and industry increasingly rely. However, this market appears to be inherently infested with a significant share of malicious workers who try to maximise their profits through cheating or sloppiness, undermining the very merits crowdsourcing has come to represent. Based on previous experience as well as psychological insights, we propose the use of a game to attract and retain a larger share of reliable workers for frequently requested crowdsourcing tasks such as relevance assessment and clustering. In a large-scale comparative study conducted on recent TREC data, we investigate the performance of traditional HIT designs and a game-based alternative that achieves high quality at significantly lower pay rates while attracting fewer malicious submissions.
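The comparison between conventional HITs and the game hinges on measuring label quality against existing TREC relevance judgements. A minimal sketch of that kind of check is shown below; the document IDs, the worker-label dictionary, and majority voting as the aggregation rule are illustrative assumptions, not the authors' actual pipeline.

```python
from collections import Counter

def majority_vote(judgements):
    """Aggregate per-document worker labels by simple majority.

    judgements: dict mapping doc_id -> list of binary relevance labels,
    one entry per worker. Returns dict doc_id -> aggregated label.
    """
    return {doc_id: Counter(labels).most_common(1)[0][0]
            for doc_id, labels in judgements.items()}

def accuracy_against_gold(aggregated, gold):
    """Fraction of aggregated labels matching the gold (e.g. TREC qrels) label."""
    shared = [d for d in aggregated if d in gold]
    if not shared:
        return 0.0
    return sum(aggregated[d] == gold[d] for d in shared) / len(shared)

# Illustrative toy data: three workers label four documents.
worker_labels = {
    "doc1": [1, 1, 0],
    "doc2": [0, 0, 0],
    "doc3": [1, 0, 1],
    "doc4": [0, 1, 1],
}
gold_labels = {"doc1": 1, "doc2": 0, "doc3": 1, "doc4": 0}

agg = majority_vote(worker_labels)
print("Agreement with gold:", accuracy_against_gold(agg, gold_labels))
```

The same accuracy figure can then be computed separately for the paid-HIT and game-based worker pools, which is one simple way to quantify the quality difference the study reports.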