Theory revision systems aim to improve the accuracy of an initial theory, producing theories that are both more accurate and more comprehensible than those obtained by purely inductive methods. Such systems search for the points where examples are misclassified and modify the theory at those points using revision operators. One such operator adds antecedents to a clause, usually following a top-down approach that considers every literal in the knowledge base. This approach yields a huge search space, which dominates the cost of the revision process. ILP systems based on Mode-Directed Inverse Entailment instead restrict the search for antecedents to the literals of the bottom clause. In this work, the bottom clause and mode declarations are introduced into a first-order logic theory revision system, aiming to improve the efficiency of the antecedent-addition operation and, consequently, of the whole revision process. Experimental results show that, compared with the revision system FORTE, the revision process is on average 55 times faster and generates more comprehensible theories, without significantly decreasing the accuracies obtained by the original revision process. Moreover, the results show that when the initial theory is approximately correct, it is more efficient to revise it than to learn from scratch, and significantly better accuracies are obtained. Finally, using the proposed theory revision system to induce theories from scratch is faster and generates more compact theories than a traditional ILP system, while obtaining competitive accuracies.
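The key efficiency idea above, restricting candidate antecedents to the literals of a bottom clause rather than drawing them from the whole knowledge base, can be illustrated with a minimal sketch. This is not the actual FORTE implementation; the predicate names, the toy knowledge base, and the `candidate_antecedents` helper are all hypothetical, chosen only to show how much the candidate pool shrinks.

```python
# Sketch (not the paper's system): contrast the antecedent search space of a
# top-down approach (all literals in the knowledge base) with one restricted
# to a bottom clause built from a single example.

def candidate_antecedents(clause_body, literal_pool):
    """Literals that could be added as antecedents: the pool minus those
    already present in the clause body."""
    return [lit for lit in literal_pool if lit not in clause_body]

# Hypothetical knowledge base: 5 binary predicates over 3 variables each way,
# giving 45 candidate literals for the unrestricted top-down search.
knowledge_base = [
    f"{pred}({a},{b})"
    for pred in ("parent", "male_of", "female_of", "sibling", "married")
    for a in "xyz"
    for b in "xyz"
]

# A bottom clause generalizes one misclassified example, so its body contains
# only the literals actually connected to that example.
bottom_clause_body = ["parent(x,y)", "sibling(y,z)"]

# Clause currently being revised.
clause_body = ["parent(x,y)"]

top_down = candidate_antecedents(clause_body, knowledge_base)
bottom_up = candidate_antecedents(clause_body, bottom_clause_body)
print(len(top_down), len(bottom_up))  # the restricted pool is far smaller
```

Even in this toy setting the bottom-clause restriction cuts the candidate set from dozens of literals to the handful relevant to the example, which is the source of the speedup the abstract reports.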