First-Order Theory Revision from Examples is the process of improving user-defined or automatically generated First-Order Logic (FOL) theories, given a set of examples. So far, the usefulness of Theory Revision systems has been limited by the cost of searching the huge search spaces they generate. This is a general difficulty when learning FOL theories, but recent work has shown that Stochastic Local Search (SLS) techniques can be effective, at least when learning FOL theories from scratch. Motivated by these results, we propose novel SLS-based search strategies for First-Order Theory Revision from Examples. Experimental results show that introducing stochastic search significantly speeds up runtime and improves accuracy.
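To make the SLS idea concrete, the following is a minimal, hedged sketch of a generic stochastic local search loop of the kind the abstract alludes to: with some probability the search takes a random neighbor (to escape local optima), and otherwise moves greedily to the best-scoring neighbor. All names, parameters, and the toy objective are illustrative assumptions, not the paper's actual revision operators or scoring function.

```python
import random

def stochastic_local_search(initial, neighbors, score,
                            steps=200, p_random=0.3, seed=0):
    """Generic SLS sketch (illustrative, not the paper's algorithm):
    with probability p_random take a random neighbor, otherwise the
    best-scoring one; track the best state seen so far."""
    rng = random.Random(seed)
    current = best = initial
    for _ in range(steps):
        cand = neighbors(current)
        if not cand:
            break  # no moves available from this state
        if rng.random() < p_random:
            current = rng.choice(cand)        # random walk step
        else:
            current = max(cand, key=score)    # greedy step
        if score(current) > score(best):
            best = current                    # remember the best state
    return best

# Toy usage: maximize -(x - 7)^2 over the integers with +/-1 moves.
result = stochastic_local_search(
    0, lambda x: [x - 1, x + 1], lambda x: -(x - 7) ** 2)
```

In a theory-revision setting, `neighbors` would enumerate candidate revisions of the current theory (e.g. adding or deleting literals or clauses) and `score` would evaluate a theory against the training examples; the random steps are what let the search escape the local optima that make purely greedy revision expensive.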