We consider a crowdsourcing database system that may cleanse, populate, or filter its data by using human workers. Just like a conventional DB system, such a crowdsourcing DB system requires data manipulation functions such as select, aggregate, maximum, average, and so on, except that it must now rely on human operators (that, for example, compare two objects) with very different latency, cost, and accuracy characteristics. In this paper, we focus on one such function, maximum, which finds the highest-ranked object or tuple in a set. In particular, we study two problems: given a set of votes (pairwise comparisons among objects), how do we select the maximum? And how do we improve our estimate by requesting additional votes? We show that in a crowdsourcing DB system, the optimal solution to both problems is NP-hard. We then provide heuristic functions to select the maximum given the evidence, and to select additional votes. We experimentally evaluate our functions to highlight their strengths and weaknesses.
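The abstract does not spell out the heuristics themselves, so the sketch below is only a minimal illustration of the problem setup, not the paper's method. It assumes votes arrive as (winner, loser) pairs and shows one natural baseline: score each object by its number of pairwise wins, and greedily request the next vote between the two current front-runners. All function names here are hypothetical.

```python
from collections import defaultdict

def win_counts(votes):
    """Tally pairwise wins from an iterable of (winner, loser) votes."""
    wins = defaultdict(int)
    objects = set()
    for winner, loser in votes:
        wins[winner] += 1
        objects.update((winner, loser))  # track losers too, even with 0 wins
    return wins, objects

def pick_max(votes):
    """Return the object with the most pairwise wins.

    NOTE: a win-count baseline for illustration only, not the
    paper's heuristic; ties are broken arbitrarily.
    """
    wins, objects = win_counts(votes)
    return max(objects, key=lambda o: wins[o])

def next_vote(votes):
    """Greedy rule for requesting one more comparison: pit the two
    current front-runners against each other, since that vote is
    most informative about which object is the maximum."""
    wins, objects = win_counts(votes)
    first, second = sorted(objects, key=lambda o: wins[o], reverse=True)[:2]
    return first, second

# Example: three workers compare objects a, b, c.
votes = [("b", "a"), ("b", "c"), ("a", "c")]
print(pick_max(votes))   # -> "b" (2 wins, vs. 1 for "a" and 0 for "c")
print(next_vote(votes))  # -> ("b", "a"), the two current leaders
```

Even this toy version hints at why the exact problems are hard: with noisy voters, the win counts can be inconsistent (cycles such as a beats b, b beats c, c beats a), so choosing the true maximum and the most informative next vote requires reasoning over all orderings rather than a simple tally.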