Kernels on Prolog Proof Trees: Statistical Learning in the ILP Setting
The Journal of Machine Learning Research
Introduction to Statistical Relational Learning (Adaptive Computation and Machine Learning series).
kFOIL: learning simple relational kernels. In AAAI'06: Proceedings of the 21st National Conference on Artificial Intelligence, Volume 1.
Kernels on Prolog ground terms. In IJCAI'05: Proceedings of the 19th International Joint Conference on Artificial Intelligence.
Probabilistic inductive logic programming. In Probabilistic Inductive Logic Programming.
Learning with kernels and logical representations. In Probabilistic Inductive Logic Programming.
Relational kernel machines for learning from graph-structured RDF data. In ESWC'11: Proceedings of the 8th Extended Semantic Web Conference on The Semantic Web: Research and Applications, Volume Part I.
Choosing an appropriate kernel function is a fundamental step in applying many popular statistical learning algorithms. Kernels are, in fact, a natural entry point for incorporating prior knowledge into the learning process. Inductive logic programming (ILP), on the other hand, offers a powerful and flexible framework for describing existing background knowledge and extracting additional knowledge from data. It therefore seems natural to explore the synergy between these two important paradigms of machine learning. In this extended abstract (see [1] for a longer version), I briefly review some of our recent work on statistical learning with kernel machines in the ILP setting.
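To make the idea of a kernel over logical representations concrete, here is a toy recursive kernel on ground terms, a minimal sketch in the spirit of the "Kernels on Prolog ground terms" line of work referenced above. The term encoding (nested tuples with the functor first) and the exact combination rule (match score plus sum over argument kernels) are illustrative assumptions, not the definition used in the cited papers.

```python
def term_kernel(s, t):
    """Illustrative recursive kernel on ground terms.

    Terms are encoded as nested tuples, e.g. ('parent', 'ann', 'bob')
    for the Prolog term parent(ann, bob); constants are plain strings.
    This encoding and the scoring rule are assumptions for this sketch.
    """
    s_compound = isinstance(s, tuple)
    t_compound = isinstance(t, tuple)
    if not s_compound and not t_compound:
        # Both are constants: exact-match (delta) kernel.
        return 1.0 if s == t else 0.0
    if s_compound and t_compound and s[0] == t[0] and len(s) == len(t):
        # Same functor and arity: one point for the functor match,
        # plus the recursive comparison of corresponding arguments.
        return 1.0 + sum(term_kernel(a, b) for a, b in zip(s[1:], t[1:]))
    # Constant vs. compound, or mismatched functor/arity: no similarity.
    return 0.0


# Example: the terms share the functor 'parent' and the first argument,
# so the kernel value is 1.0 (functor) + 1.0 ('ann' match) + 0.0 = 2.0.
print(term_kernel(('parent', 'ann', 'bob'), ('parent', 'ann', 'carl')))
```

A Gram matrix built from such a function could then be passed to any kernel machine (for instance, an SVM that accepts precomputed kernels), which is the basic mechanism by which ILP-style background knowledge, encoded in the structure of the terms, reaches the statistical learner.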