This introduction to the special issue summarizes and contextualizes six novel research contributions at the intersection of information retrieval (IR) and crowdsourcing, a field that also overlaps its close sibling, human computation. Several of the papers in this special issue extend research whose earlier stages were presented at crowdsourcing workshops held at the SIGIR and WSDM conferences, as well as at the NIST TREC conference. Since the first proposed use of crowdsourcing for IR in 2008, interest in the area has accelerated rapidly, producing three workshops, an ongoing NIST TREC track, and a wide variety of published papers, talks, and tutorials. We briefly summarize the area to help situate the contributions appearing in this special issue, and we discuss broader current trends and issues in crowdsourcing that bear on its use in IR and other fields.