Macro-operators: a weak method for learning. Artificial Intelligence.
Depth-first iterative-deepening: an optimal admissible tree search. Artificial Intelligence.
C4.5: programs for machine learning.
Artificial intelligence: a modern approach.
An introduction to computational learning theory.
Top-down induction of first-order logical decision trees. Artificial Intelligence.
Algorithmic Program Debugging.
Phase Transitions in Relational Learning. Machine Learning.
Learning Logical Definitions from Relations. Machine Learning.
Learning Conjunctive Concepts in Structural Domains. Machine Learning.
Constraint-based Learning of Long Relational Concepts. ICML '02 Proceedings of the Nineteenth International Conference on Machine Learning.
Analyzing Relational Learning in the Phase Transition Framework. ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning.
On the Stability of Example-Driven Learning Systems: A Case Study in Multirelational Learning. MICAI '02 Proceedings of the Second Mexican International Conference on Artificial Intelligence: Advances in Artificial Intelligence.
On the Complexity of Some Inductive Logic Programming Problems. ILP '97 Proceedings of the 7th International Workshop on Inductive Logic Programming.
Lookahead and Discretization in ILP. ILP '97 Proceedings of the 7th International Workshop on Inductive Logic Programming.
Relational learning as search in a critical region. The Journal of Machine Learning Research.
Human Problem Solving.
Learning on the phase transition edge. IJCAI '01 Proceedings of the 17th International Joint Conference on Artificial Intelligence.
On the Connection Between the Phase Transition of the Covering Test and the Learning Success Rate. Inductive Logic Programming.
A Model to Study Phase Transition and Plateaus in Relational Learning. ILP '08 Proceedings of the 18th International Conference on Inductive Logic Programming.
Learning discriminant rules as a minimal saturation search. ILP '10 Proceedings of the 20th International Conference on Inductive Logic Programming.
Learning theories using estimation distribution algorithms and (reduced) bottom clauses. ILP '11 Proceedings of the 21st International Conference on Inductive Logic Programming.
It is well known that heuristic search in ILP is prone to plateau phenomena. An explanation follows from the work of Giordana and Saitta: the ILP covering test is NP-complete and therefore exhibits a sharp phase transition in its coverage probability. Because the heuristic value of a hypothesis depends on the number of covered examples, the "yes" and "no" regions form plateaus that must be crossed during search without an informative heuristic value. Several subsequent works have studied this finding extensively by running a variety of learning algorithms on a large set of artificially generated problems, and have argued that the occurrence of this phase transition dooms every learning algorithm to fail to identify the target concept. We note, however, that only generate-and-test learning algorithms were applied, and that this conclusion has to be qualified for data-driven learning algorithms. Building mostly on the pioneering work of Winston on near-miss examples, we show that, on the same set of problems, a top-down data-driven strategy can cross any plateau when near-misses are supplied in the training set, whereas near-misses neither change the plateau profile nor guide a generate-and-test strategy. We conclude that the location of the target concept with respect to the phase transition is not, by itself, a reliable indicator of learning problem difficulty, as previously thought.
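The NP-complete covering test at the heart of this argument is θ-subsumption: a clause covers an example if some substitution of its variables by the example's constants maps every literal of the clause onto a literal of the example. As a hedged illustration only (not the authors' code; the predicate encoding and naming conventions here are assumptions), a minimal brute-force check can be sketched as follows; its worst-case cost, exponential in the number of clause variables, is what gives rise to the sharp coverage phase transition on random problems:

```python
from itertools import product

def subsumes(clause, example):
    """Brute-force theta-subsumption. Literals are (predicate, args)
    tuples; variables start with an uppercase letter, constants with
    a lowercase letter. Tries every substitution of clause variables
    by example constants -- the matching problem that is NP-complete."""
    facts = set(example)
    variables = sorted({a for (_, args) in clause for a in args if a[0].isupper()})
    constants = sorted({a for (_, args) in example for a in args})
    for binding in product(constants, repeat=len(variables)):
        theta = dict(zip(variables, binding))
        if all((pred, tuple(theta.get(a, a) for a in args)) in facts
               for (pred, args) in clause):
            return True
    return False

# Toy clause: a path of length two, edge(X,Y) AND edge(Y,Z).
clause = [("edge", ("X", "Y")), ("edge", ("Y", "Z"))]
positive = [("edge", ("a", "b")), ("edge", ("b", "c"))]
near_miss = [("edge", ("a", "b"))]   # minimally different: second edge removed

print(subsumes(clause, positive))    # True
print(subsumes(clause, near_miss))   # False
```

The `near_miss` example hints at why a data-driven learner benefits from such data: a near-miss differs from a positive example in a single literal, so the comparison pinpoints exactly which literal is necessary, information a coverage-based heuristic alone does not expose on a plateau.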