Crowdsourcing services, such as Amazon Mechanical Turk, allow small tasks to be distributed easily to a large number of workers. Unfortunately, because manually verifying the quality of submitted results is hard, malicious workers often exploit this difficulty and submit answers of low quality. Currently, most requesters rely on redundancy to identify the correct answers. However, redundancy is not a panacea: massive redundancy is expensive, significantly increasing the cost of crowdsourced solutions. We therefore need techniques that accurately estimate the quality of workers, allowing low-performing workers and spammers to be rejected and blocked. Existing techniques, however, cannot separate the true (unrecoverable) error rate from the (recoverable) biases that some workers exhibit, and this lack of separation leads to incorrect assessments of a worker's quality. We present algorithms that improve on the existing state-of-the-art techniques, enabling the separation of bias and error. Our algorithm generates a scalar score representing the inherent quality of each worker. We show how to incorporate cost-sensitive classification errors into the overall framework and how to seamlessly integrate unsupervised and supervised techniques for inferring worker quality. We present experimental results demonstrating the performance of the proposed algorithm under a variety of settings.
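To make the bias/error distinction concrete, the sketch below is an illustrative, Dawid–Skene-style EM estimator — not the authors' actual algorithm — that learns a confusion matrix per worker and then reduces it to a scalar expected-cost score. Under such a score, a consistently biased worker (whose answers are systematically flipped but fully recoverable) costs nearly nothing, while a random spammer is maximally costly. All function names and the cost-matrix convention here are assumptions made for illustration.

```python
import numpy as np

def dawid_skene(labels, n_classes, n_iter=50):
    """EM estimate of per-worker confusion matrices and per-item class posteriors.

    labels: iterable of (item, worker, label) triples, labels in 0..n_classes-1.
    Returns (T, C, prior): T[i, t] = P(item i has true class t),
    C[w, t, l] = P(worker w reports l | true class t), and the class prior.
    """
    items = sorted({i for i, _, _ in labels})
    workers = sorted({w for _, w, _ in labels})
    ii = {v: k for k, v in enumerate(items)}
    wi = {v: k for k, v in enumerate(workers)}
    n_items, n_workers = len(items), len(workers)

    # Initialize posteriors with majority-vote label fractions.
    T = np.zeros((n_items, n_classes))
    for i, _, l in labels:
        T[ii[i], l] += 1
    T /= T.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class prior and smoothed worker confusion matrices.
        prior = T.mean(axis=0)
        C = np.full((n_workers, n_classes, n_classes), 1e-6)
        for i, w, l in labels:
            C[wi[w], :, l] += T[ii[i]]
        C /= C.sum(axis=2, keepdims=True)

        # E-step: recompute item posteriors from prior and confusion matrices.
        logT = np.tile(np.log(prior), (n_items, 1))
        for i, w, l in labels:
            logT[ii[i]] += np.log(C[wi[w], :, l])
        T = np.exp(logT - logT.max(axis=1, keepdims=True))
        T /= T.sum(axis=1, keepdims=True)
    return T, C, prior

def worker_cost(C_w, prior, cost):
    """Scalar quality score: expected misclassification cost of the 'soft'
    (posterior) label induced by the worker's reports. Near zero for perfect
    or purely biased workers; high for random spammers."""
    total = 0.0
    for l in range(len(prior)):
        p_joint = prior * C_w[:, l]      # P(true = t, report = l)
        p_l = p_joint.sum()
        if p_l == 0:
            continue
        soft = p_joint / p_l             # posterior over true class given report l
        total += p_l * (soft @ cost @ soft)  # expected pairwise cost of soft label
    return total
```

Note how `worker_cost` captures the recoverable/unrecoverable distinction: a worker whose confusion matrix deterministically flips the classes induces one-hot soft labels, so the score is near zero, whereas a worker with uniform rows induces soft labels equal to the prior and incurs the maximum expected cost.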