An extended transformation approach to inductive logic programming

  • Authors:
  • Nada Lavrač; Peter A. Flach

  • Affiliations:
  • Jožef Stefan Institute, Ljubljana, Slovenia; University of Bristol, Bristol, United Kingdom

  • Venue:
  • ACM Transactions on Computational Logic (TOCL) - Special issue devoted to Robert A. Kowalski
  • Year:
  • 2001

Abstract

Inductive logic programming (ILP) is concerned with learning relational descriptions that typically have the form of logic programs. In a transformation approach, an ILP task is transformed into an equivalent learning task in a different representation formalism. Propositionalization is a particular transformation method, in which the ILP task is compiled to an attribute-value learning task. The main restriction of propositionalization methods such as LINUS is that they are unable to deal with nondeterminate local variables in the body of hypothesis clauses. In this paper we show how this limitation can be overcome by systematic first-order feature construction using a particular individual-centered feature bias. The approach can be applied in any domain where there is a clear notion of individual. We also show how to improve upon exhaustive first-order feature construction by using a relevancy filter. The proposed approach is illustrated on the "trains" and "mutagenesis" ILP domains.
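To make the idea of propositionalization concrete, the minimal sketch below shows how relational descriptions of individuals (here, trains and their cars, in the spirit of the "trains" domain) can be compiled into a boolean attribute-value table using existential first-order features over each individual's local objects. The data, feature names, and Python representation are illustrative assumptions, not the paper's own notation or implementation.

```python
# Hypothetical sketch of propositionalization (not the paper's code).
# Each individual (a train) is described by relational facts about its cars;
# first-order features such as "has a short closed car" become boolean
# columns in an attribute-value table that a propositional learner can use.

# Relational representation: each train has a list of cars (local objects).
trains = {
    "east1": [{"length": "short", "roof": "closed", "wheels": 2},
              {"length": "long",  "roof": "open",   "wheels": 3}],
    "west1": [{"length": "long",  "roof": "open",   "wheels": 2}],
}

# First-order features: existential conditions over a train's cars
# (the car is a local variable, quantified away inside the feature).
features = {
    "has_short_closed_car": lambda cars: any(
        c["length"] == "short" and c["roof"] == "closed" for c in cars),
    "has_car_with_3_wheels": lambda cars: any(
        c["wheels"] == 3 for c in cars),
}

# Propositionalization: one row per individual, one boolean column per feature.
table = {
    train: {name: f(cars) for name, f in features.items()}
    for train, cars in trains.items()
}

for train, row in table.items():
    print(train, row)
```

In this reading, the "individual-centered feature bias" restricts constructed features to properties of a single individual and its parts, and a relevancy filter would then discard columns that cannot contribute to discriminating the target classes.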