A Refinement Operator for Description Logics

  • Authors:
  • Liviu Badea; Shan-Hwei Nienhuys-Cheng

  • Venue:
  • ILP '00: Proceedings of the 10th International Conference on Inductive Logic Programming
  • Year:
  • 2000

Abstract

While the problem of learning logic programs has been extensively studied in ILP, learning in description logics (DLs) has mostly been tackled by empirical means. Learning in DLs is nevertheless worthwhile: both Horn logic and description logics are widely used knowledge representation formalisms, and their expressive powers are incomparable (neither includes the other as a fragment). Unlike most approaches to learning in description logics, which compute bottom-up (and typically overly specific) least generalizations of the examples, this paper addresses learning in DLs using downward (and upward) refinement operators. Technically, we construct a complete and proper refinement operator for the ALER description logic (to avoid overfitting, we exclude disjunctions from the target DL). Although no minimal refinement operators exist for ALER, we show that all refinement steps can be made minimal, except those that introduce the ⊥ concept. We additionally prove that complete refinement operators for ALER cannot be locally finite and suggest how this problem can be overcome by an MDL search heuristic. We also discuss the influence of the Open World Assumption (typically made in DLs) on example coverage.
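
The abstract appeals to several standard ILP properties of refinement operators (local finiteness, completeness, properness, minimality). As a reader's aid, the sketch below restates those notions in the usual quasi-order formulation; the notation is ours, and the concrete ALER operator is defined in the paper itself, not here.

  % Downward refinement operator rho over a set of concepts L ordered by
  % subsumption, where D \sqsubseteq C means D is subsumed by (i.e. is at
  % least as specific as) C.
  \rho : \mathcal{L} \to 2^{\mathcal{L}}, \qquad \rho(C) \subseteq \{ D \in \mathcal{L} \mid D \sqsubseteq C \}

  % Locally finite: every refinement set is finite and computable.
  \forall C:\ \rho(C)\ \text{is finite and computable}

  % Complete: every strict specialization of C is reachable, up to
  % equivalence, by iterating rho (\rho^{*} is the reflexive-transitive
  % closure of \rho).
  \forall C, D:\ D \sqsubset C \ \Longrightarrow\ \exists E \in \rho^{*}(C)\ \text{such that}\ E \equiv D

  % Proper: no refinement step returns a concept equivalent to its input.
  \forall C:\ \rho(C) \subseteq \{ D \mid D \sqsubset C \}

Minimality additionally requires each step to produce only "smallest possible" specializations (downward covers). Read against these definitions, the abstract's negative results say that for ALER completeness is incompatible with local finiteness, and that minimality can be achieved for every step except those introducing ⊥; the MDL search heuristic is the authors' suggested way of coping with the lack of local finiteness in practice.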