Foundations of logic programming; (2nd extended ed.)
Generalized subsumption and its applications to induction and redundancy
Artificial Intelligence
Learning Conjunctions of Horn Clauses
Machine Learning - Computational learning theory
Revising the logical foundations of inductive logic programming systems with ground reduced programs
New Generation Computing - Special issue on inductive logic programming 97
Foundations of Inductive Logic Programming
Learning Acyclic First-Order Horn Sentences from Entailment
ALT '97 Proceedings of the 8th International Conference on Algorithmic Learning Theory
Which Hypotheses Can Be Found with Inverse Entailment?
ILP '97 Proceedings of the 7th International Workshop on Inductive Logic Programming
Pac-learning recursive logic programs: efficient algorithms
Journal of Artificial Intelligence Research
Pac-learning recursive logic programs: negative results
Journal of Artificial Intelligence Research
Hypotheses Finding via Residue Hypotheses with the Resolution Principle
ALT '00 Proceedings of the 11th International Conference on Algorithmic Learning Theory
A Theory of Hypothesis Finding in Clausal Logic
Progress in Discovery Science, Final Report of the Japanese Discovery Science Project
This research aims to build a bridge between two research areas: Inductive Logic Programming and Computational Learning Theory. We focus on four fittings (learning methods) invented in these areas: Saturant Generalization, V*-operation with Generalization, Bottom Generalization, and Inverse Entailment. First, we show that each of them can be represented as an instance of a common schema. Second, we compare the four fittings. By modifying Jung's result, we show that every definite hypothesis derived by V*-operation with Generalization can be derived by Bottom Generalization and vice versa, but that some such hypotheses cannot be derived by Saturant Generalization. We also give a hypothesis, a general clause, which can be derived by Bottom Generalization but not by V*-operation with Generalization. We show that Inverse Entailment is more powerful than the other three fittings, both in definite and in general clausal logic. In our papers presented at the IJCAI'97 workshops and the 7th ILP workshop, Bottom Generalization was called "Inverse Entailment," but after the workshops we found that it differs from Muggleton's original Inverse Entailment. We renamed it "Bottom Generalization" to reduce confusion and allow a fair comparison of the fitting with the others.
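The comparison above can be sketched in the standard inverse-entailment setting; this is a minimal reconstruction following Muggleton's formulation, and the notation $\bot(B,E)$ for the bottom clause is assumed here rather than taken from this abstract:

```latex
% Hypothesis finding: given a background theory B and an example E,
% find a hypothesis H such that B together with H entails E.
% By contraposition,
\[
  B \wedge H \models E
  \quad\Longleftrightarrow\quad
  B \wedge \neg E \models \neg H .
\]
% The bottom clause collects the ground literals whose negations
% follow from B together with the negated example:
\[
  \bot(B,E) \;=\; \bigvee \{\, \neg L \mid
     L \text{ is a ground literal and } B \wedge \neg E \models L \,\} .
\]
% Bottom Generalization admits any H with H \models \bot(B,E),
% whereas Muggleton's Inverse Entailment, as usually implemented,
% restricts H to clauses that \theta-subsume \bot(B,E).
```

Under this reading, the abstract's claim is that the entailment-based condition $H \models \bot(B,E)$ is strictly weaker than the conditions imposed by Saturant Generalization and by V*-operation with Generalization on general clauses, while Inverse Entailment as defined in the full paper covers all four.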