Measuring Similarity between Ontologies
EKAW '02: Proceedings of the 13th International Conference on Knowledge Engineering and Knowledge Management (Ontologies and the Semantic Web)
A semiotic metrics suite for assessing the quality of ontologies
Data & Knowledge Engineering, special issue on natural language and database and information systems (NLDB '03)
Evaluating Automatically a Text Miner for Ontologies: A Catch-22 Situation?
OTM '08: Proceedings of the OTM 2008 Confederated International Conferences on the Move to Meaningful Internet Systems (CoopIS, DOA, GADA, IS, and ODBASE), Part II
Validating an Automated Evaluation Procedure for Ontology Triples in the Privacy Domain
JURIX 2005: Proceedings of the Eighteenth Annual Conference on Legal Knowledge and Information Systems
Strategies for the Evaluation of Ontology Learning
Proceedings of the 2008 Conference on Ontology Learning and Population: Bridging the Gap between Text and Knowledge
A proposal to evaluate ontology content
Applied Ontology
Validating a tool for evaluating automatically lexical triples mined from texts
OTM '07: Proceedings of the 2007 OTM Confederated International Conference on the Move to Meaningful Internet Systems, Part I
Ontology Learning and Population: Bridging the Gap between Text and Knowledge, Frontiers in Artificial Intelligence and Applications, Volume 167
On how to perform a gold standard based evaluation of ontology learning
ISWC '06: Proceedings of the 5th International Conference on The Semantic Web
Lexically evaluating ontology triples generated automatically from texts
ESWC '05: Proceedings of the Second European Conference on The Semantic Web: Research and Applications
DOGMA-MESS: a meaning evolution support system for interorganizational ontology engineering
ICCS '06: Proceedings of the 14th International Conference on Conceptual Structures: Inspiration and Application
Ontology evaluation is a labour-intensive job, so it is worthwhile to investigate automated methods. Before an automated ontology evaluation method can be considered reliable and consistent, however, it must itself be validated by human experts. In this paper we present a meta-analysis of an automated ontology evaluation procedure as it has been applied in earlier tests. Many of the principles touched upon apply to ontology evaluation in general, whether automated or not. The overall quality of an ontology is determined not only by the quality of the artifact itself, but also by the quality of its evaluation method. An analysis of the set-up and conditions under which an ontology evaluation takes place can therefore only benefit the entire domain of ontology engineering.
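One common way to validate an automated evaluation procedure against human experts is to measure inter-rater agreement between the tool's verdicts and the experts' verdicts on the same items. As a minimal illustrative sketch (not the procedure described in the paper), the following computes Cohen's kappa over hypothetical accept/reject judgments on mined ontology triples; all names and data are assumptions for illustration.

```python
# Sketch: agreement between an automated triple-evaluation procedure
# and a human expert, measured with Cohen's kappa. The verdict lists
# below are hypothetical example data, not results from the paper.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equally long lists of categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical accept/reject verdicts on ten mined triples.
automated = ["accept", "accept", "reject", "accept", "reject",
             "accept", "reject", "reject", "accept", "accept"]
expert    = ["accept", "accept", "reject", "reject", "reject",
             "accept", "reject", "accept", "accept", "accept"]

print(round(cohens_kappa(automated, expert), 3))  # 0.583
```

Kappa corrects raw agreement (here 8/10) for the agreement the two raters would reach by chance given their label frequencies, which is why it is often preferred over simple accuracy when validating an automated judge against a human one.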