We develop kernels for measuring the similarity between relational instances using background knowledge expressed in first-order logic. The method allows us to bridge the gap between traditional inductive logic programming (ILP) representations and statistical approaches to supervised learning. Logic programs are first used to generate proofs of given visitor programs that use predicates declared in the available background knowledge. A kernel is then defined over pairs of proof trees. The method can be used for supervised learning tasks and is suitable for classification as well as regression. We report positive empirical results on Bongard-like and M-of-N problems that are difficult or impossible to solve with traditional ILP techniques, as well as on real bioinformatics and chemoinformatics data sets.
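The core idea — defining a kernel over pairs of proof trees — can be illustrated with a minimal sketch. This is not the authors' actual kernel: the tree representation (functor plus child list) and the recursive counting scheme below are simplifying assumptions made for illustration only; the real method derives proof trees from Prolog visitor programs and uses a richer convolution kernel.

```python
# Illustrative sketch only (hypothetical representation, not the paper's
# exact kernel): a proof tree is a (functor, [children]) tuple, and the
# kernel recursively counts matching nodes along aligned subtrees.

def tree_kernel(t1, t2):
    """Similarity between two proof trees.

    Matching leaves contribute 1; matching internal nodes contribute
    1 plus the kernel on their pairwise-aligned children. Nodes with
    different functors contribute 0 (assumed convention).
    """
    f1, kids1 = t1
    f2, kids2 = t2
    if f1 != f2:
        return 0
    if not kids1 and not kids2:
        return 1
    return 1 + sum(tree_kernel(a, b) for a, b in zip(kids1, kids2))

# Two proofs sharing the root and first subgoal but differing in the second:
p1 = ("bongard", [("circle", [("small", [])]), ("inside", [])])
p2 = ("bongard", [("circle", [("small", [])]), ("left_of", [])])
print(tree_kernel(p1, p1))  # -> 4 (full self-similarity)
print(tree_kernel(p1, p2))  # -> 3 (shared root and first subtree)
```

A Gram matrix built from such a kernel can then be handed to any kernel machine (e.g. an SVM with a precomputed kernel) for the classification and regression tasks mentioned above.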