Crowd-sourced knowledge bases

  • Authors:
  • Yang Sok Kim;Byeong Ho Kang;Seung Hwan Ryu;Paul Compton;Soyeon Caren Han;Tim Menzies

  • Affiliations:
  • School of Computer Science and Engineering, The University of New South Wales, Sydney, New South Wales, Australia (all authors)

  • Venue:
  • PKAW'12 Proceedings of the 12th Pacific Rim conference on Knowledge Management and Acquisition for Intelligent Systems
  • Year:
  • 2012

Abstract

Crowdsourcing is a low-cost way of obtaining human judgements on a large number of items, but the knowledge behind these judgements is not reusable: each new item to be processed requires further human judgement. Ideally one would also capture the reasons people have for their judgements, so that the ability to make the same judgements could be incorporated into a crowd-sourced knowledge base. This paper reports on experiments in which 27 students each built a knowledge base to classify the same set of 1000 documents. We assessed the performance of the knowledge bases by having the same students evaluate each other's knowledge bases on a set of test documents, and we explored simple techniques for combining the knowledge from the students. The results suggest that although people vary in how they classify documents, simple merging may produce reasonable consensus knowledge bases.
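The abstract does not specify the merging technique. One plausible reading of "simple merging" is a majority vote over the labels each knowledge base assigns to a document; the sketch below illustrates that idea (the function name and the example labels are hypothetical, not taken from the paper):

```python
from collections import Counter

def merge_classifications(judgements):
    """Combine per-document labels from multiple knowledge bases
    by majority vote.

    judgements: dict mapping document id -> list of labels,
                one label per knowledge base.
    Returns a dict mapping document id -> consensus label
    (ties broken by first-seen order, as Counter preserves
    insertion order).
    """
    consensus = {}
    for doc_id, labels in judgements.items():
        counts = Counter(labels)
        consensus[doc_id] = counts.most_common(1)[0][0]
    return consensus

# Hypothetical example: three student knowledge bases
# label the same two documents.
votes = {
    "doc1": ["sports", "sports", "politics"],
    "doc2": ["finance", "finance", "finance"],
}
print(merge_classifications(votes))
```

A vote-based merge ignores the rule structure inside each knowledge base; richer schemes could merge the rules themselves, but a label-level vote is the simplest consensus baseline.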