Evaluation of ontologies is becoming increasingly important as the number of available ontologies steadily grows. Ontology evaluation is a labour-intensive task, so there is a growing need for automated evaluation methods. In this paper, we report on experiments with a light-weight automated ontology evaluation procedure, called EvaLexon, developed earlier. The experiments test whether the automated procedure can detect an improvement (or deterioration) in the quality of an ontology miner's output. Four research questions have been formulated on how to compare two rounds of ontology mining and how to assess potential differences in quality between the rounds. The set-up and software infrastructure remain identical across the two rounds of ontology mining and evaluation; the main difference is that, for one of the rounds, two human experts each manually removed irrelevant passages from the text corpus beforehand. Ideally, the EvaLexon procedure evaluates the ontology mining results in the same way the human experts do. The experiments show that the automated evaluation procedure is sensitive enough to detect a deterioration in miner output quality. However, this sensitivity cannot reliably be qualified as similar to the behaviour of the human experts, as the experts themselves largely disagree on which passages (and triples) are relevant. Novel ways of organising community-based ontology evaluation might be an interesting avenue to explore for coping with such disagreements among evaluating experts.
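The kind of comparison described above can be made concrete with a small, purely illustrative sketch. The code below is not the EvaLexon procedure; it merely assumes, for illustration, that mined triples and each expert's relevance judgements are available as sets, and it computes a per-round precision against each expert together with Cohen's kappa between the experts to quantify their disagreement. All names and data in it are hypothetical.

```python
# Hypothetical sketch (not the authors' EvaLexon procedure): compare two rounds of
# ontology mining by scoring the mined triples against each expert's relevance
# judgements, and quantify how much the experts themselves agree (Cohen's kappa).
from itertools import combinations

def precision(mined_triples, relevant_triples):
    """Fraction of mined triples that an expert judged relevant."""
    if not mined_triples:
        return 0.0
    return len(mined_triples & relevant_triples) / len(mined_triples)

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two binary label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_a = sum(labels_a) / n          # proportion judged "relevant" by expert A
    p_b = sum(labels_b) / n          # proportion judged "relevant" by expert B
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Toy data: triples mined in each round, and each expert's set of relevant triples.
round1 = {("privacy", "is_a", "right"), ("data", "has", "owner"), ("cookie", "is_a", "food")}
round2 = {("privacy", "is_a", "right"), ("data", "has", "owner")}
expert_judgements = {
    "expert_1": {("privacy", "is_a", "right"), ("data", "has", "owner")},
    "expert_2": {("privacy", "is_a", "right")},
}

for name, mined in [("round 1 (full corpus)", round1), ("round 2 (pruned corpus)", round2)]:
    for expert, relevant in expert_judgements.items():
        print(f"{name}, {expert}: precision = {precision(mined, relevant):.2f}")

# Inter-expert agreement on the union of all mined triples.
universe = sorted(round1 | round2)
for (e1, r1), (e2, r2) in combinations(expert_judgements.items(), 2):
    labels1 = [t in r1 for t in universe]
    labels2 = [t in r2 for t in universe]
    print(f"kappa({e1}, {e2}) = {cohens_kappa(labels1, labels2):.2f}")
```

Under this toy set-up, a drop in per-expert precision between the two rounds would signal a quality change, while a low kappa between the experts would flag the disagreement problem the abstract points to; the actual EvaLexon evaluation may differ in both its metrics and its inputs.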