CYC: a large-scale investment in knowledge infrastructure. Communications of the ACM.
COLING '02: Proceedings of the 19th International Conference on Computational Linguistics, Volume 1.
Extracting and evaluating general world knowledge from the Brown corpus. HLT-NAACL-TEXTMEANING '03: Proceedings of the HLT-NAACL 2003 Workshop on Text Meaning, Volume 9.
Can we derive general world knowledge from texts? HLT '02: Proceedings of the Second International Conference on Human Language Technology Research.
Crowdsourcing user studies with Mechanical Turk. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Weblogs as a source for extracting general world knowledge. Proceedings of the Fifth International Conference on Knowledge Capture.
Financial incentives and the "performance of crowds". Proceedings of the ACM SIGKDD Workshop on Human Computation.
Open information extraction from the web. IJCAI '07: Proceedings of the 20th International Joint Conference on Artificial Intelligence.
Open knowledge extraction through compositional language processing. STEP '08: Proceedings of the 2008 Conference on Semantics in Text Processing.
Creating speech and language data with Amazon's Mechanical Turk. CSLDAMT '10: Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk.
Bootstrapping a Game with a Purpose for Commonsense Collection. ACM Transactions on Intelligent Systems and Technology (TIST).
Assessing internet video quality using crowdsourcing. Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia.
Reporting bias and knowledge acquisition. Proceedings of the 2013 Workshop on Automated Knowledge Base Construction.
Efforts to automatically acquire world knowledge from text suffer from the lack of an easy means of evaluating the resulting knowledge. We describe initial experiments using Mechanical Turk to crowdsource evaluation to non-experts at little cost, yielding a collection of factoids with associated quality judgements. We describe our method for eliciting usable judgements from the public and the impact of such large-scale evaluation on the task of knowledge acquisition.
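To illustrate how per-factoid quality judgements gathered from the crowd might be combined into a single verdict, here is a minimal majority-vote aggregation sketch. This is only an assumed, simplified scheme for illustration; the paper's actual aggregation method, label set ("good"/"bad"), and example factoids are not taken from the source.

```python
from collections import Counter

def aggregate_judgements(judgements):
    """Aggregate per-factoid worker ratings by majority vote.

    `judgements` maps each factoid string to a list of labels
    (e.g. "good"/"bad") collected from different workers.
    Returns a dict mapping each factoid to (winning_label, agreement),
    where agreement is the fraction of workers choosing that label.
    """
    results = {}
    for factoid, labels in judgements.items():
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]
        results[factoid] = (label, votes / len(labels))
    return results

# Hypothetical example data, not drawn from the paper.
ratings = {
    "A dog may bark.": ["good", "good", "bad"],
    "A person may fly unaided.": ["bad", "bad", "bad"],
}
print(aggregate_judgements(ratings))
```

The agreement fraction gives a cheap confidence signal: unanimous factoids can be accepted outright, while low-agreement ones can be routed to additional workers or to expert review.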