Communications of the ACM
Classifying learnable geometric concepts with the Vapnik-Chervonenkis dimension. STOC '86: Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing
On the learnability of Boolean formulae. STOC '87: Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing
Separating PAC and mistake-bound learning models over the Boolean domain (abstract). COLT '90: Proceedings of the Third Annual Workshop on Computational Learning Theory
Learning theories in a subset of a polyadic logic. COLT '88: Proceedings of the First Annual Workshop on Computational Learning Theory
PAC-learnability of determinate logic programs. COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory
Algorithmic Program Debugging
Learning Logical Definitions from Relations. Machine Learning
Background Knowledge and Declarative Bias in Inductive Concept Learning. AII '92: Proceedings of the International Workshop on Analogical and Inductive Inference
A crucial problem in "inductive logic programming" is learning recursive logic programs from examples alone; current systems such as GOLEM and FOIL often succeed only on carefully selected sets of examples. We describe a program called FORCE2 that uses a new technique, "forced simulation", to learn two-clause "closed" linear recursive ij-determinate programs; although this class of programs is fairly restricted, it includes most of the standard benchmark problems. Experimentally, FORCE2 requires fewer examples than FOIL and is more accurate when learning from randomly chosen datasets. Formally, FORCE2 is also shown to be a PAC-learning algorithm in a variant of Valiant's [1984] model, in which we assume the ability to make two types of queries: one that gives an upper bound on the depth of a proof of an example, and one that determines whether an example can be proved in unit depth.
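To make the query model concrete, the sketch below illustrates in Python what the two assumed query types might look like when exposed as oracles. This is a minimal, hypothetical sketch: the interface names (DepthOracle, depth_bound, unit_depth) and the member/2 target program are illustrative assumptions, not part of the paper or of FORCE2 itself.

```python
from abc import ABC, abstractmethod
from typing import Any, List, Tuple

class DepthOracle(ABC):
    """Hypothetical interface for the two query types assumed by the model."""

    @abstractmethod
    def depth_bound(self, example: Any) -> int:
        """Return an upper bound on the depth of a proof of `example`."""

    @abstractmethod
    def unit_depth(self, example: Any) -> bool:
        """Return True if `example` can be proved in unit depth,
        i.e. by the non-recursive (base-case) clause alone."""

class MemberOracle(DepthOracle):
    """Toy oracle for one hypothetical target in the restricted class,
    a two-clause linear recursive program for list membership:
        member(X, [X|_]).
        member(X, [_|T]) :- member(X, T).
    Examples are (element, list) pairs."""

    def depth_bound(self, example: Tuple[Any, List[Any]]) -> int:
        x, xs = example
        # A proof can recurse at most once per list element.
        return max(1, len(xs))

    def unit_depth(self, example: Tuple[Any, List[Any]]) -> bool:
        x, xs = example
        # The base clause applies immediately when the head of the list matches.
        return bool(xs) and xs[0] == x
```

For instance, MemberOracle().depth_bound((3, [1, 2, 3])) returns 3, while MemberOracle().unit_depth((3, [1, 2, 3])) returns False, since proving membership of 3 requires two recursive steps before the base clause applies.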