In this paper, we introduce a new normal form for context-free grammars, called reversible context-free grammars, for the problem of learning context-free grammars from positive-only examples. A context-free grammar G = (N, Σ, P, S) is said to be reversible if (1) A → α and B → α in P implies A = B and (2) A → αBβ and A → αCβ in P implies B = C. We show that the class of reversible context-free grammars can be identified in the limit from positive samples of structural descriptions, and that there is an efficient algorithm for identifying them from such samples, where a structural description of a context-free grammar is an unlabelled derivation tree of the grammar. This implies that if positive structural examples of a reversible context-free grammar for the target language are available to the learning algorithm, the full class of context-free languages can be learned efficiently from positive samples.
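The two conditions in the definition can be checked directly on a finite set of productions. The following is a minimal sketch, not taken from the paper: it assumes a grammar is given as a list of (left-hand side, right-hand-side tuple) pairs plus the set of nonterminal symbols, and tests condition (1), sometimes called invertibility, and condition (2), sometimes called reset-freeness.

```python
from itertools import combinations

def is_reversible(productions, nonterminals):
    """Check whether a CFG, given as (lhs, rhs-tuple) pairs, is reversible.

    Condition (1): A -> alpha and B -> alpha in P implies A = B,
    i.e. no two productions share a right-hand side under different
    left-hand sides.
    Condition (2): A -> alpha B beta and A -> alpha C beta in P implies
    B = C, i.e. two productions with the same left-hand side may not
    differ in exactly one position where both symbols are nonterminals.
    """
    # Condition (1): identical right-hand sides force identical left-hand sides.
    rhs_to_lhs = {}
    for lhs, rhs in productions:
        if rhs in rhs_to_lhs and rhs_to_lhs[rhs] != lhs:
            return False
        rhs_to_lhs[rhs] = lhs

    # Condition (2): compare every pair of productions with the same
    # left-hand side and the same right-hand-side length.
    for (l1, r1), (l2, r2) in combinations(productions, 2):
        if l1 != l2 or len(r1) != len(r2):
            continue
        diffs = [i for i in range(len(r1)) if r1[i] != r2[i]]
        if len(diffs) == 1:
            i = diffs[0]
            if r1[i] in nonterminals and r2[i] in nonterminals:
                return False
    return True
```

For example, the grammar with productions S → aSb and S → ab satisfies both conditions, while a grammar containing A → a and B → a violates condition (1), and one containing S → aA and S → aB (with A, B distinct nonterminals) violates condition (2).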