Reflecting on a process to automatically evaluate ontological material generated automatically

  • Authors:
  • Peter Spyns

  • Affiliations:
  • Vrije Universiteit Brussel, STAR Lab, Brussel, Belgium

  • Venue:
  • OTM'10: Proceedings of the 2010 International Conference on On the Move to Meaningful Internet Systems
  • Year:
  • 2010

Abstract

Ontology evaluation is a labour-intensive job; hence, it is relevant to investigate automated methods. However, before an automated ontology evaluation method can be considered reliable and consistent, it must be validated by human experts. In this paper we present a meta-analysis of an automated ontology evaluation procedure as it has been applied in earlier tests. Many of the principles touched upon also apply to ontology evaluation in general, whether automated or not. The overall quality of an ontology is determined not only by the quality of the artifact itself, but also by the quality of its evaluation method. An analysis of the set-up and conditions under which an ontology evaluation takes place can only benefit the entire domain of ontology engineering.