Theory of recursive functions and effective computability
Foundations of logic programming; (2nd extended ed.)
Identification of unions of languages drawn from an identifiable class
COLT '89 Proceedings of the second annual workshop on Computational learning theory
Higher recursion theory
Inductive inference of monotonic formal systems from positive data
New Generation Computing - Selected papers from the international workshop on algorithmic learning theory, 1990
PAC-learnability of determinate logic programs
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
Learning elementary formal systems
Theoretical Computer Science
On the role of procrastination in machine learning
Information and Computation
Rich classes inferable from positive data
Information and Computation
First-order jk-clausal theories are PAC-learnable
Artificial Intelligence
PAC-learning non-recursive Prolog clauses
Artificial Intelligence
Elementary formal systems, intrinsic complexity, and procrastination
Information and Computation
Generalized notions of mind change complexity
COLT '97 Proceedings of the tenth annual conference on Computational learning theory
Learning first order universal Horn expressions
COLT '98 Proceedings of the eleventh annual conference on Computational learning theory
Inductive Logic Programming: From Machine Learning to Software Engineering
Foundations of Inductive Logic Programming
Learning Conjunctive Concepts in Structural Domains
Machine Learning
Some Lower Bounds for the Computational Complexity of Inductive Logic Programming
ECML '93 Proceedings of the European Conference on Machine Learning
General Inductive Inference Types Based on Linearly-Ordered Sets
STACS '96 Proceedings of the 13th Annual Symposium on Theoretical Aspects of Computer Science
Not-So-Nearly-Minimal-Size Program Inference
Algorithmic Learning for Knowledge-Based Systems, GOSLER Final Report
The power of procrastination in inductive inference: How it depends on used ordinal notations
EuroCOLT '95 Proceedings of the Second European Conference on Computational Learning Theory
Ordinal Mind Change Complexity of Language Identification
EuroCOLT '97 Proceedings of the Third European Conference on Computational Learning Theory
Generalized Unification as Background Knowledge in Learning Logic Programs
ALT '93 Proceedings of the 4th International Workshop on Algorithmic Learning Theory
A Class of Prolog Programs Inferable from Positive Data
ALT '96 Proceedings of the 7th International Workshop on Algorithmic Learning Theory
Learning from Entailment of Logic Programs with Local Variables
ALT '98 Proceedings of the 9th International Conference on Algorithmic Learning Theory
PAC-learning recursive logic programs: efficient algorithms
Journal of Artificial Intelligence Research
PAC-learning recursive logic programs: negative results
Journal of Artificial Intelligence Research
On Sufficient Conditions for Learnability of Logic Programs from Positive Data
ILP '99 Proceedings of the 9th International Workshop on Inductive Logic Programming
The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for the learnability of these classes both from positive facts and from positive and negative facts.

Building on Angluin's notion of finite thickness and Wright's work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara's notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and of learnability from complete (both positive and negative) data. More precisely, it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length ≤ m:

- is identifiable in the limit from positive data with an ordinal mind change bound of ω^m;
- is identifiable in the limit from both positive and negative data with an ordinal mind change bound of ω × m.

These sufficient conditions are employed to give ordinal mind change bounds for the learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro's linear programs, Arimura and Shinohara's depth-bounded linearly-covering programs, and Krishna Rao's depth-bounded linearly-moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
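The notion of identification in the limit with a mind change bound can be illustrated with a toy learner. The example below is a hypothetical sketch, not taken from the paper: the class learned is {L_d : d ≥ 1}, where L_d is the set of multiples of d. The learner always conjectures the most specific consistent language, L_g for g = gcd of the data seen; each mind change strictly decreases g, so the number of mind changes on any text is bounded by the number of prime factors of the first datum.

```python
from math import gcd

def learn(text):
    """Process a sequence of positive examples for some L_d and return
    the final conjecture g (standing for L_g) together with the number
    of mind changes the learner made along the way."""
    hypothesis, mind_changes, g = None, 0, 0
    for x in text:
        g = gcd(g, x)  # most specific L_d containing all data seen so far
        if hypothesis is not None and hypothesis != g:
            mind_changes += 1  # the conjecture changed: one mind change
        hypothesis = g
    return hypothesis, mind_changes
```

For instance, on the text 12, 18, 8 the learner conjectures L_12, then L_6, then L_2, converging after two mind changes; on 5, 10, 20 it conjectures L_5 immediately and never changes its mind.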