Crowdsourcing is a low-cost way of obtaining human judgements on large numbers of items, but the knowledge behind those judgements is not reusable: each new item to be processed requires further human judgement. Ideally, one could also capture the reasons people give for their judgements, so that the ability to make the same judgements could be incorporated into a crowdsourced knowledge base. This paper reports on experiments in which 27 students each built a knowledge base to classify the same set of 1000 documents. We assessed the students' knowledge bases by having the same students evaluate each other's knowledge bases on a set of test documents, and we explored simple techniques for combining the knowledge from the students. The results suggest that although people vary in how they classify documents, simple merging may produce reasonable consensus knowledge bases.
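The abstract does not say how the students' knowledge bases were merged. As a minimal sketch, assuming each knowledge base can be reduced to a mapping from document ids to class labels, one simple consensus technique is majority voting across the individual classifications. Everything below (the function merge_by_majority, the document ids, the labels) is a hypothetical illustration of such a scheme, not the paper's method.

from collections import Counter

def merge_by_majority(assignments, min_votes=None):
    """Combine per-annotator document classifications by majority vote.

    assignments: list of dicts, one per annotator's knowledge base,
                 each mapping document id -> assigned class label.
    min_votes:   minimum number of agreeing annotators needed to keep
                 a label in the consensus; defaults to a strict majority
                 of the annotators who labelled that document.
    Returns a dict mapping document id -> consensus label, omitting
    documents where no label meets the threshold.
    """
    # Tally votes per document across all annotators.
    votes = {}
    for kb in assignments:
        for doc_id, label in kb.items():
            votes.setdefault(doc_id, Counter())[label] += 1

    # Keep only labels that reach the agreement threshold.
    consensus = {}
    for doc_id, counter in votes.items():
        label, count = counter.most_common(1)[0]
        threshold = (min_votes if min_votes is not None
                     else sum(counter.values()) // 2 + 1)
        if count >= threshold:
            consensus[doc_id] = label
    return consensus

# Example: three annotators classifying four documents.
a = {"d1": "sports", "d2": "politics", "d3": "tech"}
b = {"d1": "sports", "d2": "tech",     "d3": "tech", "d4": "sports"}
c = {"d1": "sports", "d2": "politics",                "d4": "politics"}

print(merge_by_majority([a, b, c]))
# {'d1': 'sports', 'd2': 'politics', 'd3': 'tech'}  ('d4' is dropped: no majority)

A scheme like this discards documents the crowd disagrees on, which matches the abstract's observation that people vary in document classification; a real merge of rule-based knowledge bases would need to reconcile the rules themselves, not just their outputs.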