Assessing iterations of an automated ontology evaluation procedure

  • Authors: Peter Spyns

  • Affiliation: Vrije Universiteit Brussel, STAR Lab, Brussels, Belgium

  • Venue: OTM'10: Proceedings of the 2010 International Conference on On the Move to Meaningful Internet Systems, Part II

  • Year: 2010

Abstract

Evaluation of ontologies is becoming increasingly important as the number of available ontologies steadily grows. Ontology evaluation is a labour-intensive and time-consuming task; hence, the need for automated evaluation methods grows as well. In this paper, we report on experiments using a light-weight automated ontology evaluation procedure (called EvaLexon) developed earlier. The experiments test whether the automated procedure can detect an improvement (or deterioration) in the quality of an ontology miner's output. Four research questions have been formulated on how to compare two rounds of ontology mining and how to assess potential differences in quality between the rounds. The entire set-up and software infrastructure remain identical across the two rounds of ontology mining and evaluation. The main difference between the two rounds is that, beforehand, two human experts separately removed irrelevant passages from the text corpus. Ideally, the EvaLexon procedure evaluates the ontology mining results in the same way as the human experts do. The experiments show that the automated evaluation procedure is sensitive enough to detect a deterioration in the quality of the miner's output. However, this sensitivity cannot reliably be qualified as similar to the behaviour of the human experts, as the experts themselves largely disagree on which passages (and triples) are relevant. Novel ways of organising community-based ontology evaluation might be an interesting avenue to explore in order to cope with such disagreements between evaluating experts.
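
The abstract describes comparing two mining rounds against human expert judgements and notes the experts' own disagreement. The sketch below is a purely illustrative Python example, not the EvaLexon procedure itself: it assumes hypothetical triples and expert relevance sets, scores each round with precision/recall against a strict consensus gold standard, and quantifies inter-expert agreement with a simple Jaccard overlap.

    # Hypothetical sketch (not the EvaLexon implementation): compare two rounds of
    # ontology mining against expert judgements and quantify expert agreement.
    # All triples, expert sets and metric choices below are illustrative assumptions.

    def precision_recall(mined: set, relevant: set) -> tuple:
        """Precision/recall of mined triples against an expert-marked relevant set."""
        if not mined:
            return 0.0, 0.0
        tp = len(mined & relevant)
        precision = tp / len(mined)
        recall = tp / len(relevant) if relevant else 0.0
        return precision, recall

    def jaccard(a: set, b: set) -> float:
        """Overlap between two experts' relevance judgements (1.0 = full agreement)."""
        union = a | b
        return len(a & b) / len(union) if union else 1.0

    # Illustrative (subject, relation, object) triples from two mining rounds.
    round1 = {("ontology", "has", "concept"),
              ("term", "denotes", "concept"),
              ("page", "has", "footer")}          # noisy triple from an irrelevant passage
    round2 = {("ontology", "has", "concept"),
              ("term", "denotes", "concept")}      # corpus cleaned by the experts

    expert_a = {("ontology", "has", "concept"), ("term", "denotes", "concept")}
    expert_b = {("ontology", "has", "concept")}

    gold = expert_a & expert_b  # one possible (strict) consensus gold standard

    for name, mined in (("round 1", round1), ("round 2", round2)):
        p, r = precision_recall(mined, gold)
        print(f"{name}: precision={p:.2f} recall={r:.2f}")

    print(f"expert agreement (Jaccard): {jaccard(expert_a, expert_b):.2f}")

A low Jaccard score between the experts would mirror the paper's observation that expert disagreement makes it hard to certify the automated procedure as behaving "like" a human evaluator.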