Inductive inference can be considered one of the fundamental paradigms of algorithmic learning theory. We survey recently obtained results and show their impact on potential applications. Since the main focus is on the efficiency of learning, we also deal with postulates of naturalness and their impact on the efficiency of limit learners. In particular, we look at the learnability of the class of all pattern languages and ask whether one can design a learner within the paradigm of learning in the limit that is nevertheless efficient.

To achieve this goal, we deal with iterative learning and its interplay with the allowed hypothesis spaces. This interplay also has a severe impact on the postulates of naturalness that any learner can satisfy.

Furthermore, since a limit learner is only required to converge, one never knows at any particular learning stage whether the learner has already succeeded. The resulting uncertainty may be prohibitive in many applications. We survey results that resolve this problem by outlining a new learning model, called stochastic finite learning. Although the pattern languages can neither be finitely inferred from positive data nor PAC-learned, our approach can be extended to a stochastic finite learner that exactly infers all pattern languages from positive data with high confidence.

Finally, we apply the techniques developed to the problem of learning conjunctive concepts.
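As a toy illustration of learning in the limit from positive data (a sketch for intuition, not the survey's own algorithm), the following Python program learns one-variable pattern languages over the alphabet {0,1}. A pattern such as 0x1x denotes the set of all strings obtained by substituting one fixed non-empty string for every occurrence of the variable x; the candidate enumeration below is deliberately brute force and only meant for short examples.

```python
from itertools import product

VAR = "x"          # the single pattern variable; constants are alphabet symbols
ALPHABET = "01"

def matches(pattern, s):
    """Does s belong to the (non-erasing) language of a one-variable pattern?"""
    parts = pattern.split(VAR)
    k = len(parts) - 1                      # number of variable occurrences
    if k == 0:
        return pattern == s                 # constant pattern
    const_len = sum(map(len, parts))
    rem = len(s) - const_len
    if rem < k or rem % k:                  # substitution length must be a positive integer
        return False
    m = rem // k
    u = s[len(parts[0]):len(parts[0]) + m]  # substitution forced by the first occurrence
    return pattern.replace(VAR, u) == s

def candidates(s):
    """All one-variable patterns of length <= |s| that generate s (brute force)."""
    pats = ("".join(p)
            for length in range(1, len(s) + 1)
            for p in product(ALPHABET + VAR, repeat=length))
    return [pat for pat in pats if matches(pat, s)]

def learn(text):
    """Iterative limit learner: keep the current hypothesis while it is
    consistent; otherwise guess a maximum-length pattern consistent with
    all positive examples seen so far."""
    hyp, seen = None, []
    for w in text:
        seen.append(w)
        if hyp is None or not matches(hyp, w):
            shortest = min(seen, key=len)
            consistent = [p for p in candidates(shortest)
                          if all(matches(p, v) for v in seen)]
            hyp = max(consistent, key=len)   # longer pattern = more specific guess
        yield hyp
```

On the text 0010, 0111, 000100, 010110 drawn from the pattern 0x1x, the learner's second and all later hypotheses are already 0x1x; in general, however, such a learner is only guaranteed to converge in the limit, and one cannot tell from any finite stage whether convergence has occurred — exactly the uncertainty that stochastic finite learning is designed to remove. The preference for maximum-length consistent patterns loosely mirrors Angluin's notion of descriptive patterns.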