Communications of the ACM
On the complexity of inductive inference
Information and Control
Learnability and the Vapnik-Chervonenkis dimension
Journal of the ACM (JACM)
A guided tour of Chernoff bounds
Information Processing Letters
Prudence and other conditions on formal language learning
Information and Computation
Pattern languages are not learnable
COLT '90 Proceedings of the third annual workshop on Computational learning theory
A polynomial-time algorithm for learning k-variable pattern languages from examples
COLT '89 Proceedings of the second annual workshop on Computational learning theory
Polynomial-time inference of arbitrary pattern languages
New Generation Computing - Selected papers from the international workshop on algorithmic learning theory, 1990
Equivalence of models for polynomial learnability
Information and Computation
Exact identification of read-once formulas using fixed points of amplification functions
SIAM Journal on Computing
Learning one-variable pattern languages in linear average time
COLT '98 Proceedings of the eleventh annual conference on Computational learning theory
Annals of Mathematics and Artificial Intelligence
Algorithmic Learning for Knowledge-Based Systems, GOSLER Final Report
A Guided Tour Across the Boundaries of Learning Recursive Languages
Algorithmic Learning for Knowledge-Based Systems, GOSLER Final Report
Inductive Inference, DFAs, and Computational Complexity
AII '89 Proceedings of the International Workshop on Analogical and Inductive Inference
ALT '97 Proceedings of the 8th International Conference on Algorithmic Learning Theory
Monotonic Versus Nonmonotonic Language Learning
Proceedings of the Second International Workshop on Nonmonotonic and Inductive Logic
Formal languages and their relation to automata
A complete and tight average-case analysis of learning monomials
STACS'99 Proceedings of the 16th annual conference on Theoretical aspects of computer science
SAGA '01 Proceedings of the International Symposium on Stochastic Algorithms: Foundations and Applications
From learning in the limit to stochastic finite learning
Theoretical Computer Science - Algorithmic learning theory
Learning a subclass of regular patterns in polynomial time
Theoretical Computer Science - Algorithmic learning theory
Learning indexed families of recursive languages from positive data: A survey
Theoretical Computer Science
Discontinuities in pattern inference
Theoretical Computer Science
Breaking Anonymity by Learning a Unique Minimum Hitting Set
CSR '09 Proceedings of the Fourth International Computer Science Symposium in Russia on Computer Science - Theory and Applications
Language structure using fuzzy similarity
IEEE Transactions on Fuzzy Systems
A bibliographical study of grammatical inference
Pattern Recognition
ALT'11 Proceedings of the 22nd international conference on Algorithmic learning theory
Inductive inference and language learning
TAMC'06 Proceedings of the Third international conference on Theory and Applications of Models of Computation
Exploratory analysis system for semi-structured engineering logs
DAS'06 Proceedings of the 7th international conference on Document Analysis Systems
Patterns with bounded treewidth
LATA'12 Proceedings of the 6th international conference on Language and Automata Theory and Applications
Regular and context-free pattern languages over small alphabets
DLT'12 Proceedings of the 16th international conference on Developments in Language Theory
Fast learning of restricted regular expressions and DTDs
Proceedings of the 16th International Conference on Database Theory
Regular and context-free pattern languages over small alphabets
Theoretical Computer Science
The present paper proposes a new learning model, called stochastic finite learning, and shows the whole class of pattern languages to be learnable within this model. This main result is achieved by providing a new and improved average-case analysis of the Lange–Wiehagen (New Generation Computing, 8, 361–370) algorithm, which learns the class of all pattern languages in the limit from positive data. The complexity measure chosen is the total learning time, i.e., the overall time taken by the algorithm until convergence. The expectation of the total learning time is carefully analyzed, and exponentially shrinking tail bounds for it are established for a large class of probability distributions. For every pattern π containing k different variables, it is shown that Lange and Wiehagen's algorithm possesses an expected total learning time of O(α̂^k E[Λ] log_{1/β}(k)), where α̂ and β are two easily computable parameters arising naturally from the underlying probability distributions, and E[Λ] is the expected example string length. Finally, assuming a bit of domain knowledge concerning the underlying class of probability distributions, it is shown how to convert learning in the limit into stochastic finite learning.
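The hypothesis construction at the heart of the analyzed algorithm can be sketched as follows. This is a minimal illustration, not the paper's code, assuming the standard formulation of the Lange–Wiehagen algorithm: the hypothesis is built only from the shortest positive examples seen so far; positions where all of them agree become constants, and disagreeing positions become variables, with identical columns sharing one variable.

```python
def lw_hypothesis(examples):
    """Sketch of the Lange-Wiehagen hypothesis: a pattern computed from
    the minimal-length positive examples seen so far. Variables are
    rendered as 'x0', 'x1', ... (naming is illustrative)."""
    m = min(len(s) for s in examples)
    shortest = [s for s in examples if len(s) == m]
    pattern = []
    var_of_column = {}  # identical columns of symbols reuse one variable
    for i in range(m):
        column = tuple(s[i] for s in shortest)
        if len(set(column)) == 1:
            # all shortest examples agree at position i: keep the constant
            pattern.append(column[0])
        else:
            # disagreement: a variable position; equal columns share a variable
            if column not in var_of_column:
                var_of_column[column] = f"x{len(var_of_column)}"
            pattern.append(var_of_column[column])
    return pattern

# e.g. lw_hypothesis(["abab", "acac"]) yields ['a', 'x0', 'a', 'x0']
```

Under the distributions considered in the paper, the examples driving this construction are drawn at random, which is what makes the expected total learning time and its tail bounds amenable to analysis.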