Learning with kernels and logical representations

  • Authors:
  • Paolo Frasconi; Andrea Passerini

  • Affiliations:
  • Machine Learning and Neural Networks Group, Dipartimento di Sistemi e Informatica, Università degli Studi di Firenze, Italy (both authors)

  • Venue:
  • Probabilistic inductive logic programming
  • Year:
  • 2008

Abstract

In this chapter, we describe a view of statistical learning in the inductive logic programming setting based on kernel methods. The relational representation of the data, together with background knowledge, is used to form a kernel function, enabling us to subsequently apply a number of kernel-based statistical learning algorithms. Different representational frameworks and their associated algorithms are explored in this chapter. In kernels on Prolog proof trees, the representation of an example is obtained by recording the execution trace of a program expressing background knowledge. In declarative kernels, features are directly associated with mereotopological relations. Finally, in kFOIL, features correspond to the truth values of clauses dynamically generated by a greedy search algorithm guided by the empirical risk.
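To make the kFOIL-style idea from the abstract concrete, the following is a minimal illustrative sketch (not the authors' implementation): each feature is the truth value of a clause evaluated on a relational example, and the kernel is the dot product of the resulting 0/1 feature vectors. The examples, facts, and clauses below are hypothetical placeholders.

```python
# Illustrative sketch, not the chapter's actual implementation.
# Each example is a set of ground facts, encoded here as tuples.
ex1 = {("atom", "c"), ("atom", "h"), ("bond", ("c", "h"))}
ex2 = {("atom", "o"), ("atom", "h"), ("bond", ("o", "h"))}

# Hypothetical "clauses": boolean tests over an example's facts.
clauses = [
    lambda ex: ("atom", "c") in ex,              # contains carbon
    lambda ex: ("atom", "o") in ex,              # contains oxygen
    lambda ex: any(p == "bond" for p, _ in ex),  # has some bond
]

def features(ex):
    """Map an example to the 0/1 truth values of each clause."""
    return [int(c(ex)) for c in clauses]

def kernel(a, b):
    """Dot product of feature vectors: number of clauses true in both."""
    return sum(fa * fb for fa, fb in zip(features(a), features(b)))

print(features(ex1))     # [1, 0, 1]
print(kernel(ex1, ex2))  # 1 (only the bond clause holds for both)
```

In kFOIL itself, the clause set is not fixed in advance as here; clauses are generated greedily, with each candidate scored by the empirical risk of the kernel machine trained on the resulting feature space.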