Relational Learning (RL) has attracted interest as a way to bridge the gap between efficient attribute-value learners and the growing number of applications whose data are stored in multi-relational databases. However, current systems rely on general-purpose problem solvers that do not scale up well. This contrasts with the past decade of success in the combinatorics community, where studies of random problems within the phase transition framework made it possible to evaluate and develop better specialised algorithms, able to solve real-world applications with up to millions of variables. Several studies along these lines have been carried out in RL, such as the analysis of the phase transition of an NP-complete sub-problem, the subsumption test, but none has directly studied the phase transition of RL itself. As RL, in general, is $\Sigma_2$-hard, we propose a first random problem generator that exhibits the phase transition of its decision version, beyond NP. We study the learning cost of several learners on inherently easy and hard instances, and conclude on the expected benefits of this new benchmarking tool for RL.
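The phase transition framework mentioned above is classically illustrated on random k-SAT, where the probability of satisfiability drops sharply as the clause-to-variable ratio crosses a critical value (around 4.27 for 3-SAT), with the hardest instances clustered at the threshold. The sketch below is not the paper's $\Sigma_2$-hard RL generator; it is a minimal, assumed illustration of the easy-hard-easy phenomenon on random 3-SAT, with hypothetical function names (`random_3sat`, `sat_fraction`) and a brute-force solver adequate only for small instances.

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    # One clause = 3 distinct variables, each negated with probability 1/2.
    # A literal is stored as (variable index, negated?).
    clauses = []
    for _ in range(n_clauses):
        vars_ = rng.sample(range(n_vars), 3)
        clauses.append([(v, rng.random() < 0.5) for v in vars_])
    return clauses

def satisfiable(n_vars, clauses):
    # Brute-force check over all 2^n assignments (fine for small n).
    for bits in itertools.product([False, True], repeat=n_vars):
        # Literal (v, neg) is true when the assigned value differs from neg.
        if all(any(bits[v] != neg for v, neg in c) for c in clauses):
            return True
    return False

def sat_fraction(n_vars, ratio, trials=50, seed=0):
    # Estimate P(satisfiable) at a given clause/variable ratio.
    rng = random.Random(seed)
    m = round(ratio * n_vars)
    hits = sum(satisfiable(n_vars, random_3sat(n_vars, m, rng))
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    # Below the ~4.27 threshold almost all instances are SAT,
    # above it almost none are; hardness peaks near the crossover.
    for ratio in (2.0, 4.3, 7.0):
        print(ratio, sat_fraction(12, ratio))
```

Plotting `sat_fraction` against the ratio for increasing `n_vars` shows the transition sharpening, which is the signature the paper's generator aims to reproduce for the (harder) RL decision problem.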