This paper focuses on a major step of machine learning: checking whether an example matches a candidate hypothesis. In relational learning, matching can be viewed as a Constraint Satisfaction Problem (CSP). The complexity of this task is analyzed within the phase-transition framework, investigating its impact on the effectiveness of two relational learners, FOIL and G-NET. The critical factors of complexity, and their critical values, are investigated experimentally on artificial problems. This leads to distinguishing several kinds of learning domains, depending on whether the target concept lies in the "mushy" region or not. Interestingly, experiments with FOIL and G-NET show that both learners tend to induce hypotheses whose matching problems lie inside the phase-transition region, even when the constructed target concept lies far outside it. Moreover, target concepts constructed too close to the phase transition are hard to learn, and both learners fail on them. The paper offers an explanation for this behavior and proposes a classification of learning domains according to their hardness.
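To make the matching-as-CSP view concrete, here is a minimal sketch (not from the paper; the `matches` function and its data encoding are illustrative assumptions). A conjunctive hypothesis over binary predicates is matched against an example by backtracking over variable bindings: the CSP's variables are the hypothesis variables, the domain is the example's constants, and each literal is a constraint requiring the bound literal to appear among the example's ground facts.

```python
def matches(hypothesis, example, constants):
    """Check whether a conjunctive hypothesis matches an example (a CSP).

    hypothesis: list of (predicate, var1, var2) literals over variables.
    example:    set of (predicate, const1, const2) ground facts.
    constants:  iterable of the example's constants (the CSP domain).
    Returns True iff some assignment of variables to constants maps
    every literal of the hypothesis onto a fact of the example.
    """
    variables = sorted({v for _, a, b in hypothesis for v in (a, b)})

    def extend(assignment, remaining):
        if not remaining:
            return True
        var = remaining[0]
        for c in constants:
            assignment[var] = c
            # Prune: check only literals whose variables are all bound so far.
            consistent = all(
                (p, assignment[a], assignment[b]) in example
                for p, a, b in hypothesis
                if a in assignment and b in assignment
            )
            if consistent and extend(assignment, remaining[1:]):
                return True
            del assignment[var]
        return False

    return extend({}, variables)
```

For instance, the hypothesis edge(X,Y) ∧ edge(Y,Z) matches the example {edge(1,2), edge(2,3)} via the binding {X:1, Y:2, Z:3}, whereas edge(X,Y) ∧ edge(Y,X) does not. In the worst case this search is exponential in the number of variables, which is why the location of matching problems relative to the phase transition matters for the learners' running time.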