Reflecting on a process to automatically evaluate ontological material generated automatically
OTM'10: Proceedings of the 2010 International Conference on On the Move to Meaningful Internet Systems
Evaluation of ontologies is becoming increasingly important as the number of available ontologies steadily grows. Ontology evaluation is a labour-intensive and tedious job, hence the importance of developing automated methods. Before automated methods can achieve reliability and widespread adoption, however, they must themselves first be assessed by human experts. We summarise the experience gained while assessing an automated ontology evaluation method. We previously implemented and evaluated a lightweight automatic ontology evaluation method that knowledge engineers can easily apply to determine rapidly whether or not the most important notions and relationships are represented in a set of ontology triples. Domain experts contributed to the assessment effort, and various assessment experiments were carried out. In this paper we focus on the practical lessons learnt, in particular the limitations that result from real-life constraints, rather than on the precise method for automatically evaluating the results of an ontology miner. A typology of potential evaluation biases is applied to demonstrate the substantial impact that the conditions under which an evaluation takes place can have on the reliability of its outcomes. As a result, the notion of "meta-evaluation of ontologies" is introduced and its importance illustrated. The main conclusion is that even more domain experts have to be involved, which is exactly what the automated evaluation procedure was meant to avoid. A catch-22 situation?
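The abstract only characterises the method as a lightweight check of whether the most important notions and relationships appear in a set of ontology triples; the actual procedure is not given here. Purely to illustrate the general idea, below is a minimal Python sketch in which the function name term_coverage, the naive lowercase string matching, and the example data are all our assumptions, not the authors' implementation:

```python
# Hypothetical sketch only: given triples mined from text and a list of key
# domain terms supplied by experts, report which terms are (not) covered.
from typing import Dict, Iterable, List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def term_coverage(triples: Iterable[Triple],
                  key_terms: List[str]) -> Dict[str, bool]:
    """Report, per expert-supplied term, whether any triple mentions it."""
    vocab = set()
    for subject, relation, obj in triples:
        vocab.update(part.lower() for part in (subject, relation, obj))
    return {term: term.lower() in vocab for term in key_terms}

if __name__ == "__main__":
    mined = [("ontology", "has_part", "concept"),
             ("concept", "related_to", "relation")]
    key_terms = ["ontology", "concept", "axiom"]  # assumed expert input
    coverage = term_coverage(mined, key_terms)
    print(coverage)  # {'ontology': True, 'concept': True, 'axiom': False}
    print(sum(coverage.values()), "of", len(key_terms), "key terms covered")
```

Even this toy version hints at the catch-22 the abstract describes: the list of key terms must come from domain experts, so the "automated" check still depends on the very expert involvement it is meant to reduce.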