Inductive inference of monotonic formal systems from positive data. New Generation Computing, selected papers from the International Workshop on Algorithmic Learning Theory, 1990.
PAC-learnability of determinate logic programs. COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory.
First-order jk-clausal theories are PAC-learnable. Artificial Intelligence.
PAC-learning non-recursive Prolog clauses. Artificial Intelligence.
Learning first-order universal Horn expressions. COLT '98: Proceedings of the Eleventh Annual Conference on Computational Learning Theory.
Some lower bounds for the computational complexity of inductive logic programming. ECML '93: Proceedings of the European Conference on Machine Learning.
Mind change complexity of learning logic programs. EuroCOLT '99: Proceedings of the 4th European Conference on Computational Learning Theory.
Generalized unification as background knowledge in learning logic programs. ALT '93: Proceedings of the 4th International Workshop on Algorithmic Learning Theory.
A class of Prolog programs inferable from positive data. ALT '96: Proceedings of the 7th International Workshop on Algorithmic Learning Theory.
Learning acyclic first-order Horn sentences from entailment. ALT '97: Proceedings of the 8th International Conference on Algorithmic Learning Theory.
Learning from entailment of logic programs with local variables. ALT '98: Proceedings of the 9th International Conference on Algorithmic Learning Theory.
PAC-learning recursive logic programs: efficient algorithms. Journal of Artificial Intelligence Research.
A class of Prolog programs with non-linear outputs inferable from positive data. ALT '05: Proceedings of the 16th International Conference on Algorithmic Learning Theory.
Shinohara, Arimura, and Krishna Rao have shown learnability in the limit, from positive data alone, of the minimal models of certain classes of logic programs. In most cases, these results concern logic programs in which the "size" of the head yields a bound on the size of the body literals. When local variables are present, however, such a bound on body-literal size cannot be ensured directly, and the above authors enforce it using technical notions such as modes and linear inequalities. The present paper develops a conceptually clean framework in which the behavior of local variables is controlled by nonlocal ones. It is shown that, for certain classes of logic programs, learnability from positive data is equivalent to limiting identification of bounds on the number of clauses and the number of local variables; this reduces the learning problem to finding two integers. This cleaner framework generalizes all the known results and establishes the learnability of new classes.
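The reduction above, from learning a program to identifying two integer bounds in the limit, can be illustrated with a toy sketch. This is not the paper's algorithm: the target here is an artificial concept parameterized by two unknown integers, and the learner simply conjectures the least bounds consistent with the positive examples seen so far, which stabilize on the true bounds once maximal examples have appeared.

```python
def limit_learner(stream):
    """Yield a conjecture (k, l) after each positive example.

    Each example is a pair of non-negative integers; the target concept
    is assumed (for illustration only) to be {(i, j) : i <= k*, j <= l*}
    for unknown true bounds (k*, l*).  The conjectures converge to
    (k*, l*) in the limit, mirroring how the paper reduces learning a
    program class to identifying two integers (clause and local-variable
    bounds) from positive data.
    """
    k = l = 0
    for i, j in stream:
        k, l = max(k, i), max(l, j)  # least bounds covering the data so far
        yield (k, l)

# A positive presentation of the concept with true bounds (3, 2).
data = [(0, 0), (1, 2), (3, 0), (2, 1), (3, 2)]
conjectures = list(limit_learner(data))
# The sequence of conjectures stabilizes at (3, 2).
```

The point of the sketch is only the convergence behavior: after finitely many examples the conjectured pair of integers never changes again, which is exactly the "limiting identification of bounds" the abstract refers to.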