Refinement operators are frequently used in multirelational learning (Inductive Logic Programming, ILP) to search systematically through a generality order on clauses for a correct theory. A learning system considers only the clauses reachable by a finite number of applications of its refinement operator; i.e., the refinement operator determines the system's search space. For efficiency reasons, we would like a refinement operator to compute the smallest set of clauses necessary to find a correct theory. In this paper we present a formal method based on macro-operators that reduces the search space defined by a downward refinement operator while finding the same theory as the original operator. Essentially, we define a refinement operator which adds to a clause not only single literals but also automatically created sequences of literals (macro-operators). This in turn allows us to discard clauses which do not belong to a correct theory. Experimental results show that this technique significantly reduces the search space and thus accelerates the learning process.
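The idea can be illustrated with a minimal sketch, assuming a toy representation in which a clause body is a tuple of literal strings, the literal vocabulary and the single macro are hand-coded for illustration, and refinement simply appends literals. The names (`refine_single`, `refine_macro`, `depth_to`) and the example literals are hypothetical, not from the paper; the sketch only shows how adding macro sequences shortens the refinement chain needed to reach a target clause:

```python
from itertools import chain

# Toy literal vocabulary (hypothetical; not from the paper).
LITERALS = ["bond(A,B)", "atom(B,c)", "charge(B,C)"]

# A macro-operator: a sequence of literals treated as one refinement step.
# In the paper such sequences are created automatically; here it is hand-coded.
MACROS = [("bond(A,B)", "atom(B,c)")]

def refine_single(clause):
    """Classic downward refinement: add one literal at a time."""
    return {clause + (lit,) for lit in LITERALS if lit not in clause}

def refine_macro(clause):
    """Macro refinement: add single literals or whole macro sequences."""
    out = refine_single(clause)
    for macro in MACROS:
        if all(lit not in clause for lit in macro):
            out.add(clause + macro)
    return out

def depth_to(target, refine):
    """Breadth-first search: number of refinement steps to reach `target`."""
    frontier, depth = {()}, 0
    while target not in frontier:
        frontier = set(chain.from_iterable(refine(c) for c in frontier))
        depth += 1
    return depth

target = ("bond(A,B)", "atom(B,c)")
print(depth_to(target, refine_single))  # 2 single-literal steps
print(depth_to(target, refine_macro))   # 1 step via the macro
```

Reaching useful clauses in fewer operator applications is what lets the learner discard intermediate clauses (here, the one-literal prefix of the macro) and so shrink the search space.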