Databases often return poor answers to judgmental queries, such as asking for the best among the movies released in recent months. Processing such queries requires human input to supply missing information and to resolve uncertainty or inconsistency in the query. Nowadays it is common for people to seek answers on microblogs by asking questions or sharing them with friends. This is easily done from a smartphone, which diffuses a question to a large number of users through message propagation in microblogs. This trend, known as CrowdSearch, is growing in importance. Because crowd members may hold conflicting opinions, majority voting is employed as the crowd-wisdom aggregation scheme. In this demo, we study the problem of minimizing the monetary cost of a crowdsourced query subject to a specified expected accuracy of the aggregated answer. We present CrowdSeed, a system that automatically integrates human input to process queries posed on microblogs. We demonstrate the effectiveness and efficiency of our system on real-world data, and present interesting results from a game called "Who is in the CrowdSeed?".
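To illustrate the cost-versus-accuracy trade-off the abstract describes, the sketch below computes, under a simplifying assumption that each worker answers a binary question correctly with independent probability `p`, how large a crowd a majority vote needs to reach a target expected accuracy, and the resulting monetary cost at a fixed price per answer. The function names, the i.i.d. worker model, and the per-answer pricing are illustrative assumptions, not the actual CrowdSeed algorithm.

```python
from math import comb

def majority_accuracy(n: int, p: float) -> float:
    """Probability that a majority vote of n independent workers,
    each correct with probability p, yields the right answer to a
    binary question (n is assumed odd, so no ties occur)."""
    k = n // 2 + 1  # smallest number of correct votes forming a majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def min_crowd_size(target: float, p: float, cost_per_answer: float = 1.0,
                   max_n: int = 99):
    """Smallest odd crowd size whose majority vote meets the target
    expected accuracy, together with its total monetary cost.
    Returns None if max_n workers are not enough."""
    for n in range(1, max_n + 1, 2):  # odd sizes only, to avoid ties
        if majority_accuracy(n, p) >= target:
            return n, n * cost_per_answer
    return None

# With 70%-accurate workers, reaching 90% expected accuracy
# requires 9 answers under this model.
print(min_crowd_size(0.9, 0.7))  # (9, 9.0)
```

This mirrors the stated optimization direction: rather than fixing the crowd size, one fixes the required accuracy of the aggregated answer and asks for the cheapest crowd that achieves it.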