Ontology evaluation has proven to be one of the more difficult problems in ontology engineering. Researchers have proposed numerous methods to evaluate the logical correctness of an ontology, its structure, or its coverage of a domain represented by a corpus. However, evaluating whether ontology assertions correspond to the real world remains a manual and time-consuming task. In this paper, we explore the feasibility of using microtask crowdsourcing through Amazon Mechanical Turk to evaluate ontologies. Specifically, we look at the task of verifying the subclass--superclass hierarchy in ontologies. We demonstrate that the performance of Amazon Mechanical Turk workers (turkers) on this task is comparable to the performance of undergraduate students in a formal study. We explore the effect of ontology type on turker performance and demonstrate that turkers can achieve accuracy as high as 90% when verifying hierarchy statements from common-sense ontologies such as WordNet. Finally, we compare the performance of turkers to that of domain experts on verifying statements from an ontology in the biomedical domain. We report on lessons learned about designing ontology-evaluation experiments on Amazon Mechanical Turk. Our results demonstrate that microtask crowdsourcing can become a scalable and efficient component in ontology-engineering workflows.
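The crowdsourced verification workflow the abstract describes — posing each subclass--superclass assertion as a yes/no microtask and combining answers from several workers — can be sketched as below. This is a minimal illustration, not the paper's implementation: the function and statement names are hypothetical, and majority voting is assumed here as one simple aggregation scheme.

```python
from collections import Counter

def aggregate_votes(responses):
    """Aggregate yes/no worker answers per hierarchy statement by majority vote.

    responses: dict mapping a statement (e.g. "Is every X a kind of Y?")
               to the list of answers collected from workers.
    Returns a dict mapping each statement to (majority answer, agreement ratio),
    where the agreement ratio is the fraction of workers who gave that answer.
    """
    verdicts = {}
    for statement, answers in responses.items():
        counts = Counter(answers)
        answer, votes = counts.most_common(1)[0]
        verdicts[statement] = (answer, votes / len(answers))
    return verdicts

# Hypothetical microtasks derived from subclass assertions (5 workers each)
responses = {
    "Is every dog a kind of mammal?": ["yes", "yes", "yes", "no", "yes"],
    "Is every tomato a kind of vegetable?": ["no", "yes", "no", "no", "no"],
}
print(aggregate_votes(responses))
```

In practice the agreement ratio can serve as a confidence signal: statements with low agreement could be routed to additional workers or to domain experts rather than accepted or rejected outright.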