Bayesian inductive logic programming

  • Authors:
  • Stephen Muggleton

  • Affiliations:
  • Oxford University Computing Laboratory, Wolfson Building, Parks Road, Oxford, OX1 3QD

  • Venue:
  • COLT '94: Proceedings of the Seventh Annual Conference on Computational Learning Theory
  • Year:
  • 1994


Abstract

Inductive Logic Programming (ILP) involves the construction of first-order definite clause theories from examples and background knowledge. Unlike both traditional Machine Learning and Computational Learning Theory, ILP is based on the lock-step development of theory, implementations and applications. ILP systems have been successfully applied to learning structure-activity rules for drug design, semantic grammar rules, finite element mesh design rules, and rules for the prediction of protein structure and mutagenic molecules. These strong applications can be contrasted with relatively weak PAC-learning results (even highly restricted forms of logic programs are known to be prediction-hard). It has recently been argued that this mismatch is due to distributional assumptions made in application domains. These assumptions can be modelled as a Bayesian prior probability representing subjective degrees of belief. Other authors have argued for the use of Bayesian prior distributions for reasons different from those given here, though this has not led to a new model of polynomial-time learnability. Incorporating Bayesian prior distributions over time-bounded hypotheses into PAC leads to a new model called U-learnability. It is argued that U-learnability is more appropriate than PAC for Universal (Turing computable) languages. Time-bounded logic programs have been shown to be polynomially U-learnable under certain distributions. The use of time-bounded hypotheses enforces decidability and allows a unified characterization of speed-up learning and inductive learning. U-learnability has PAC and Natarajan's model of speed-up learning as special cases.
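
As an illustrative sketch (not taken from the paper itself), the role of such a prior can be written with standard Bayesian notation, where H ranges over a countable space of time-bounded hypotheses \mathcal{H}, E is the set of examples, P(H) is the subjective prior and P(E | H) the likelihood of the examples under H; selecting the maximum a posteriori hypothesis is one common way of exploiting a prior of this kind:

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{\sum_{H' \in \mathcal{H}} P(E \mid H')\, P(H')},
\qquad
H_{\mathrm{MAP}} \;=\; \operatorname*{arg\,max}_{H \in \mathcal{H}} \, P(E \mid H)\, P(H).
\]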