Computing incoherence explanations for learned ontologies

  • Authors:
  • Daniel Fleischhacker, Christian Meilicke, Johanna Völker, Mathias Niepert

  • Affiliations:
  • Data & Web Science Research Group, University of Mannheim, Germany (Fleischhacker, Meilicke, Völker); Computer Science & Engineering, University of Washington (Niepert)

  • Venue:
  • RR'13: Proceedings of the 7th International Conference on Web Reasoning and Rule Systems
  • Year:
  • 2013

Abstract

Recent developments in ontology learning research have made it possible to generate significantly more expressive ontologies. Novel approaches can support human ontology engineers in rapidly creating logically complex and richly axiomatized schemas. Although the higher complexity increases the likelihood of modeling flaws, there is currently little tool support for diagnosing and repairing ontologies produced by automated approaches. Off-the-shelf debuggers based on logical reasoning struggle with the particular characteristics of learned ontologies: they are often inefficient at detecting modeling flaws and at computing all of the logical reasons for the problems they discover. In this paper, we propose a reasoning approach for discovering unsatisfiable classes and properties that is optimized for handling automatically generated, expressive ontologies. We describe our implementation of this approach and evaluate it in a comparison with state-of-the-art reasoners.
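
For context, the sketch below shows the generic, off-the-shelf pipeline that such an optimized approach competes with: loading an ontology with the OWL API, classifying it with HermiT to find unsatisfiable classes, and computing justifications (minimal axiom sets entailing each unsatisfiability) with the black-box hitting-set-tree generator from owlapi-tools. This is not the authors' implementation, only a minimal baseline under stated assumptions; the file name learned-ontology.owl is a placeholder.

```java
import java.io.File;
import java.util.Set;

import org.semanticweb.HermiT.ReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;

import com.clarkparsia.owlapi.explanation.BlackBoxExplanation;
import com.clarkparsia.owlapi.explanation.HSTExplanationGenerator;

public class IncoherenceCheck {
    public static void main(String[] args) throws Exception {
        // Load the (possibly learned) ontology; the file name is a placeholder.
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
                new File("learned-ontology.owl"));

        // Classify with HermiT and collect all unsatisfiable classes
        // (getEntitiesMinusBottom drops owl:Nothing itself).
        OWLReasonerFactory factory = new ReasonerFactory();
        OWLReasoner reasoner = factory.createReasoner(ontology);
        Set<OWLClass> unsatisfiable = reasoner.getUnsatisfiableClasses()
                .getEntitiesMinusBottom();

        // For each unsatisfiable class, compute all justifications via the
        // classic black-box + hitting-set-tree procedure.
        BlackBoxExplanation single =
                new BlackBoxExplanation(ontology, factory, reasoner);
        HSTExplanationGenerator multi = new HSTExplanationGenerator(single);
        for (OWLClass cls : unsatisfiable) {
            Set<Set<OWLAxiom>> explanations = multi.getExplanations(cls);
            System.out.println(cls + ": " + explanations.size()
                    + " explanation(s)");
        }

        // A property p is unsatisfiable iff the class expression
        // (p some owl:Thing) is unsatisfiable; this standard reduction lets
        // the same machinery cover unsatisfiable properties.
        OWLDataFactory df = manager.getOWLDataFactory();
        for (OWLObjectProperty p : ontology.getObjectPropertiesInSignature()) {
            OWLClassExpression probe =
                    df.getOWLObjectSomeValuesFrom(p, df.getOWLThing());
            if (!reasoner.isSatisfiable(probe)) {
                System.out.println("Unsatisfiable property: " + p);
            }
        }
        reasoner.dispose();
    }
}
```

On learned ontologies with many unsatisfiable classes, this baseline invokes the reasoner once per justification candidate, which is exactly the cost the paper's optimized procedure targets.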