Reducing examples in relational learning with bounded-treewidth hypotheses
NFMCP'12 Proceedings of the First international conference on New Frontiers in Mining Complex Patterns
We study the reducibility of examples in several typical inductive logic programming (ILP) benchmarks. The notion of reducibility we use is related to θ-reduction, which is commonly used to reduce hypotheses in ILP. Whereas examples are usually not reducible on their own, they often become implicitly reducible once the language for constructing hypotheses is fixed. We show that the number of ground facts in a dataset can be almost halved for some real-world molecular datasets. Furthermore, we study the impact this has on the popular ILP system Aleph.
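To make the underlying notion concrete: a clause C θ-subsumes a clause D if some substitution θ maps every literal of C into D, and θ-reduction removes literals whose deletion leaves an equivalent clause. The sketch below is a minimal brute-force illustration of these two operations, not the algorithm of the paper (which exploits bounded treewidth); the literal encoding as tuples and the uppercase-means-variable convention are assumptions for the example.

```python
from itertools import product

def is_var(term):
    """Convention for this sketch: uppercase names are variables."""
    return term[0].isupper()

def subsumes(c, d):
    """Brute-force theta-subsumption test: is there a substitution
    theta such that every literal of clause c, with theta applied,
    occurs in clause d? Literals are tuples: (predicate, arg1, ...)."""
    variables = sorted({t for lit in c for t in lit[1:] if is_var(t)})
    terms = sorted({t for lit in d for t in lit[1:]})
    for combo in product(terms, repeat=len(variables)):
        theta = dict(zip(variables, combo))
        mapped = {(lit[0],) + tuple(theta.get(t, t) for t in lit[1:])
                  for lit in c}
        if mapped <= set(d):
            return True
    return False

def reduce_clause(c):
    """Greedy theta-reduction: drop a literal whenever the remaining
    clause still theta-subsumes (and is thus equivalent to) it."""
    reduced = list(c)
    for lit in list(c):
        candidate = [l for l in reduced if l != lit]
        if candidate and subsumes(reduced, candidate):
            reduced = candidate
    return reduced

# {p(X,Y), p(X,Z)} reduces to a single literal via theta = {Z -> Y}.
print(reduce_clause([("p", "X", "Y"), ("p", "X", "Z")]))
```

The brute-force search enumerates all variable-to-term mappings and is exponential in the number of variables; the efficiency question for subsumption testing is exactly why restricted hypothesis languages such as bounded treewidth matter in the paper's setting.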