Flattening and Saturation: Two Representation Changes for Generalization
Machine Learning - Special issue on evaluating and changing representation
An Incremental Algorithm for a Generalization of the Shortest-Path Problem
Journal of Algorithms
IEEE Intelligent Systems
Combining Statistical and Relational Methods for Learning in Hypertext Domains
ILP '98 Proceedings of the 8th International Workshop on Inductive Logic Programming
ANF: A Fast and Scalable Tool for Data Mining in Massive Graphs
Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Biological applications of multi-relational data mining
ACM SIGKDD Explorations Newsletter
Link Mining: A New Data Mining Challenge
ACM SIGKDD Explorations Newsletter
Learning Relations by Pathfinding
AAAI'92 Proceedings of the Tenth National Conference on Artificial Intelligence
Using the Bottom Clause and Mode Declarations on FOL Theory Revision from Examples
ILP '08 Proceedings of the 18th International Conference on Inductive Logic Programming
Compile the Hypothesis Space: Do it Once, Use it Often
Fundamenta Informaticae - Progress on Multi-Relational Data Mining
An ILP System for Learning Head Output Connected Predicates
EPIA '09 Proceedings of the 14th Portuguese Conference on Artificial Intelligence: Progress in Artificial Intelligence
Online Structure Learning for Markov Logic Networks
ECML PKDD'11 Proceedings of the 2011 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part II
Learning from multi-relational domains has gained increasing attention over the past few years. Inductive logic programming (ILP) systems, which often rely on hill-climbing heuristics when learning first-order concepts, have been a dominant force in multi-relational concept learning. However, hill-climbing heuristics are susceptible to local maxima and plateaus. In this paper, we show how the links between objects in multi-relational data can be exploited to help a first-order rule learning system direct its search by explicitly traversing these links to find paths between variables of interest. Our contributions are twofold: (i) we extend the pathfinding algorithm of Richards and Mooney [12] to make use of mode declarations, which specify the mode of call (input or output) for predicate variables, and (ii) we apply our extended pathfinding algorithm to saturated bottom clauses, which anchor one end of the search space and allow us to use the background knowledge that built the saturated clause to further direct the search. Experimental results on a medium-sized dataset show that pathfinding allows one to consider interesting clauses that would not easily be found by Aleph.
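The core idea in the abstract — traversing links between objects to connect variables of interest, with mode declarations constraining which argument positions a path may enter (input, `+`) and leave (output, `-`) — can be illustrated with a minimal sketch. This is not the paper's implementation; the fact base, predicate names, and the tuple-based mode encoding are all hypothetical, chosen only to show mode-directed pathfinding over ground facts.

```python
from collections import deque

def find_path(start, goal, facts, modes):
    """Breadth-first search from constant `start` to constant `goal`.

    `facts` is a list of ground facts (predicate, args); `modes` maps each
    predicate to a tuple of '+' (input) / '-' (output) markers per argument.
    A fact may only be entered through a '+' position and left through a
    '-' position, mimicking how mode declarations restrict link traversal.
    Returns the list of facts forming the path, or None if none exists.
    """
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for pred, args in facts:
            mode = modes.get(pred, ("+",) * len(args))
            for i, arg in enumerate(args):
                if arg == node and mode[i] == "+":      # enter via input arg
                    for j, nxt in enumerate(args):
                        if mode[j] == "-" and nxt not in seen:  # leave via output arg
                            seen.add(nxt)
                            queue.append((nxt, path + [(pred, args)]))
    return None

# Hypothetical fact base: who advises whom, who coauthors with whom.
facts = [
    ("advises", ("page", "ong")),
    ("coauthor", ("ong", "dutra")),
]
modes = {"advises": ("+", "-"), "coauthor": ("+", "-")}

path = find_path("page", "dutra", facts, modes)
# path chains advises(page, ong) and coauthor(ong, dutra)
```

In the actual system the path would then be variablized into clause literals; here the sketch stops at the ground path, since that is the part the pathfinding step contributes.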