Learnability of description logics

  • Authors:
  • William W. Cohen; Haym Hirsh

  • Affiliations:
  • AI Principles Research Department, AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, NJ; Department of Computer Science, Hill Center, Busch Campus, Rutgers University, New Brunswick, NJ

  • Venue:
  • COLT '92: Proceedings of the fifth annual workshop on Computational learning theory
  • Year:
  • 1992

Abstract

This paper considers the learnability of subsets of first-order logic. Prior work has established two boundaries of learnability: Haussler [1989] has shown that conjunctions in first-order logic cannot be learned in the Valiant model, even if the form of the conjunction is highly restricted; on the other hand, Valiant [1984] has shown that propositional conjunctions are learnable. In this paper, we study the learnability of the restricted first-order logics known as description logics. Description logics are also subsets of the predicate calculus, but are expressed using a different syntax, allowing a different set of syntactic restrictions to be explored. We first define a simple description logic, summarize some results on its expressive power, and then analyze its learnability. It is shown that the full logic cannot be tractably learned; however, syntactic restrictions exist that enable tractable learning. The learnability results hold even if the alphabets of primitive classes and roles (over which descriptions are constructed) are infinite; our positive result thus generalizes not only the result of Valiant [1984] on learning monomials to learning concepts in our (conjunctive) first-order language, but also the result of Blum [1990] on learning monomials over infinite attribute spaces.
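As a generic illustration of the kind of syntax involved (not taken from the abstract itself; the particular logic studied in the paper may differ in detail), a standard description-logic concept built from a primitive class and a role, such as

    Person ⊓ (∀child.Doctor)

corresponds to the one-free-variable first-order formula

    Person(x) ∧ ∀y (child(x, y) → Doctor(y))

so concepts of this form are variable-free, conjunctive counterparts of restricted first-order conjunctions, which is what makes their learnability directly comparable to the monomial-learning results of Valiant and Blum cited above.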