Some complex problems, such as image tagging and natural language processing, are very challenging for computers; even state-of-the-art technology is not yet able to provide satisfactory accuracy. Therefore, rather than relying solely on developing new and better algorithms to handle such tasks, we turn to crowdsourcing, employing human participation, to make up for the shortfall in current technology. Crowdsourcing is a good complement to many computer tasks: a complex job can be divided into computer-oriented tasks and human-oriented tasks, which are then assigned to machines and humans respectively. To leverage the power of crowdsourcing, we design and implement a Crowdsourcing Data Analytics System, CDAS. CDAS is a framework designed to support the deployment of various crowdsourcing applications. The core of CDAS is a quality-sensitive answering model, which guides the crowdsourcing engine in processing and monitoring human tasks. In this paper, we introduce the principles of our quality-sensitive model. To satisfy the user's required accuracy, the model guides the crowdsourcing query engine in designing and processing the corresponding crowdsourcing jobs. It provides an estimated accuracy for each generated result based on the human workers' historical performance. When verifying the quality of a result, the model employs an online strategy to reduce waiting time. To show the effectiveness of the model, we implement and deploy two analytics jobs on CDAS: a Twitter sentiment analysis job and an image tagging job, using real Twitter and Flickr data as queries respectively. We compare our approaches with state-of-the-art classification and image annotation techniques. The results show that the human-assisted methods can indeed achieve much higher accuracy. By embedding the quality-sensitive model into the crowdsourcing query engine, we effectively reduce the processing cost while maintaining the required query answer quality.
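To illustrate the kind of accuracy estimation the abstract describes, the sketch below shows one common way to score a crowdsourced answer from workers' historical accuracies: weight each worker's vote by how often they have been right in the past, under a naive independence assumption. This is only an illustrative sketch, not the exact CDAS model; the function name and the two-label (binary) setting are assumptions for the example.

```python
def estimate_answer(votes, prior=0.5):
    """Pick a binary label and estimate the probability it is correct.

    votes: list of (label, accuracy) pairs, where label is True/False and
           accuracy is the fraction of that worker's past answers that
           were judged correct.
    prior: prior probability that the true label is True.

    Assumes workers answer independently (a naive Bayes style model).
    Returns (label, confidence).
    """
    p_true, p_false = prior, 1.0 - prior
    for label, acc in votes:
        if label:
            p_true *= acc          # worker said True and is right with prob acc
            p_false *= 1.0 - acc   # ... or the truth is False and they erred
        else:
            p_true *= 1.0 - acc
            p_false *= acc
    total = p_true + p_false
    label = p_true >= p_false
    confidence = max(p_true, p_false) / total
    return label, confidence

# Three workers vote on whether an image contains a cat:
# two reliable workers say yes, one weaker worker says no.
label, confidence = estimate_answer([(True, 0.9), (True, 0.8), (False, 0.6)])
# label is True; confidence = 0.144 / 0.15 = 0.96
```

A query engine built on such an estimate can stop collecting answers for a task as soon as the confidence reaches the user's required accuracy, which is one way to realize the online, cost-reducing behavior the abstract refers to.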