Crowdsourcing is increasingly seen as a feasible alternative to traditional methods of gathering relevance labels for the evaluation of search engines, offering a solution to the scalability problem that hinders traditional approaches. However, crowdsourcing raises a range of questions regarding the quality of the resulting data. What can be said about the quality of data contributed by anonymous workers who are paid only cents for their efforts? Can higher pay guarantee better quality? Do better-qualified workers produce higher-quality labels? In this paper, we investigate these and similar questions via a series of controlled crowdsourcing experiments in which we vary pay, required effort, and worker qualifications, and observe their effects on the resulting label quality, measured as agreement with a gold set.
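The quality measure referred to here, agreement with a gold set, can be read as simple accuracy against trusted judgments for the same topic-document pairs. The following Python sketch illustrates that computation; the function name, data layout, and example values are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumed, not the authors' code): label quality as
# agreement with a gold set, i.e., the fraction of crowd labels that
# match the trusted gold judgment for the same (topic, document) pair.

def gold_agreement(crowd_labels, gold_labels):
    """crowd_labels, gold_labels: dicts mapping (topic, doc) -> relevance label.
    Returns the proportion of shared pairs whose crowd label equals the gold label."""
    shared = [pair for pair in crowd_labels if pair in gold_labels]
    if not shared:
        return 0.0
    matches = sum(1 for pair in shared if crowd_labels[pair] == gold_labels[pair])
    return matches / len(shared)

# Hypothetical example: three documents judged for one topic.
crowd = {("topic1", "docA"): 1, ("topic1", "docB"): 0, ("topic1", "docC"): 1}
gold = {("topic1", "docA"): 1, ("topic1", "docB"): 1, ("topic1", "docC"): 1}
print(gold_agreement(crowd, gold))  # -> 0.666..., two of three labels agree
```

In practice one would compute this per worker or per experimental condition (pay level, effort, qualification) and compare the resulting agreement scores across conditions.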