The state of the art in ontology learning: a framework for comparison
The Knowledge Engineering Review
Learning domain ontologies for Web service descriptions: an experiment in bioinformatics
WWW '05 Proceedings of the 14th International Conference on World Wide Web
Learning Domain Ontologies from Document Warehouses and Dedicated Web Sites
Computational Linguistics
WordNet: similarity - measuring the relatedness of concepts
AAAI'04 Proceedings of the 19th National Conference on Artificial Intelligence
Lexically evaluating ontology triples generated automatically from texts
ESWC'05 Proceedings of the Second European Conference on The Semantic Web: Research and Applications
An ontology engineering methodology for DOGMA
Applied Ontology - Ontological Foundations of Conceptual Modelling
Evaluating Automatically a Text Miner for Ontologies: A Catch-22 Situation?
OTM '08 Proceedings of the OTM 2008 Confederated International Conferences (CoopIS, DOA, GADA, IS, and ODBASE), Part II: On the Move to Meaningful Internet Systems
Assessing iterations of an automated ontology evaluation procedure
OTM'10 Proceedings of the 2010 International Conference on On the Move to Meaningful Internet Systems: Part II
Reflecting on a process to automatically evaluate ontological material generated automatically
OTM'10 Proceedings of the 2010 International Conference on On the Move to Meaningful Internet Systems
In this paper we validate a simple method for objectively assessing the results of extracting material (specifically, triples) from text corpora to build ontologies. The EU Privacy Directive served as the corpus, and two domain experts manually validated the results across several experimental settings. As the evaluation scores are rather modest (sensitivity or recall: 0.5, specificity: 0.539, precision: 0.21), we regard them as a baseline reference for future experiments. Nevertheless, the human experts judged the automated evaluation procedure sufficiently effective and time-saving for use in real-life ontology modelling situations.
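The three scores reported above follow the standard confusion-matrix definitions. A minimal sketch of how they are computed, assuming the usual true/false positive and negative counts; the counts used below are invented purely for illustration and are not reconstructed from the paper:

```python
# Standard confusion-matrix metrics as reported in the abstract:
# sensitivity (recall), specificity, and precision.

def recall(tp: int, fn: int) -> float:
    """Sensitivity: fraction of expert-approved triples that were extracted."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of expert-rejected triples that were also rejected automatically."""
    return tn / (tn + fp)

def precision(tp: int, fp: int) -> float:
    """Fraction of extracted triples that the experts approved."""
    return tp / (tp + fp)

# Hypothetical counts, for illustration only.
tp, fp, fn, tn = 21, 79, 21, 92
print(recall(tp, fn))      # 0.5
print(precision(tp, fp))   # 0.21
print(specificity(tn, fp))
```

Note that precision and specificity answer different questions: precision looks only at what the extractor produced, while specificity looks at what the experts rejected, which is why both are reported alongside recall.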