A Polynomial Algorithm for the Inference of Context Free Languages
ICGI '08 Proceedings of the 9th international colloquium on Grammatical Inference: Algorithms and Applications
Learning Left-to-Right and Right-to-Left Iterative Languages
ICGI '08 Proceedings of the 9th international colloquium on Grammatical Inference: Algorithms and Applications
Identification in the Limit of k,l-Substitutable Context-Free Languages
ICGI '08 Proceedings of the 9th international colloquium on Grammatical Inference: Algorithms and Applications
Learning efficiency of very simple grammars from positive data
Theoretical Computer Science
A note on contextual binary feature grammars
CLAGI '09 Proceedings of the EACL 2009 Workshop on Computational Linguistic Aspects of Grammatical Inference
ALT'09 Proceedings of the 20th international conference on Algorithmic learning theory
Distributional learning of some context-free languages with a minimally adequate teacher
ICGI'10 Proceedings of the 10th international colloquium on Grammatical Inference: theoretical results and applications
Learning context free grammars with the syntactic concept lattice
ICGI'10 Proceedings of the 10th international colloquium on Grammatical Inference: theoretical results and applications
PAC-learning unambiguous k, l-NTS≤ languages
ICGI'10 Proceedings of the 10th international colloquium on Grammatical Inference: theoretical results and applications
Towards general algorithms for grammatical inference
ALT'10 Proceedings of the 21st international conference on Algorithmic learning theory
Using Contextual Representations to Efficiently Learn Context-Free Languages
The Journal of Machine Learning Research
Theoretical Computer Science
A learnable representation for syntax using residuated lattices
FG'09 Proceedings of the 14th international conference on Formal grammar
Formal and empirical grammatical inference
HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts of ACL 2011
Tarski's principle, categorial grammars and learnability
LATA'11 Proceedings of the 5th international conference on Language and automata theory and applications
Distributional learning of abstract categorial grammars
LACL'11 Proceedings of the 6th international conference on Logical aspects of computational linguistics
Towards dual approaches for learning context-free grammars based on syntactic concept lattices
DLT'11 Proceedings of the 15th international conference on Developments in language theory
Research on Language and Computation
A language theoretic approach to syntactic structure
MOL'11 Proceedings of the 12th biennial conference on The mathematics of language
Distributional learning of simple context-free tree grammars
ALT'11 Proceedings of the 22nd international conference on Algorithmic learning theory
Three learnable models for the description of language
LATA'10 Proceedings of the 4th international conference on Language and Automata Theory and Applications
Learning in the limit with lattice-structured hypothesis spaces
Theoretical Computer Science
Polynomial time learning of some multiple context-free languages with a minimally adequate teacher
FG'10/FG'11 Proceedings of the 15th and 16th international conference on Formal Grammar
This paper formalises the idea of substitutability introduced by Zellig Harris in the 1950s and makes it the basis of an algorithm for learning a subclass of context-free languages from positive data alone. We show that this class has a polynomial characteristic set, and thus prove polynomial identification in the limit of the class. We discuss how this class of languages relates to other classes commonly studied in grammatical inference. It transpires that it is not necessary to identify constituents in order to learn a context-free language: it suffices to identify the syntactic congruence, since the operations of the syntactic monoid can be converted into a context-free grammar. We also discuss a modification to the algorithm that produces a reduction system rather than a context-free grammar, which will be much more compact. We discuss the relationship to Angluin's notion of reversibility for regular languages. Finally, we demonstrate that an implementation of this algorithm is capable of learning a classic example of structure-dependent syntax in English; this constitutes a refutation of an argument that has been used in support of nativist theories of language.
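The core idea, identifying the syntactic congruence from positive data rather than identifying constituents, can be illustrated with a minimal sketch. This is not the paper's algorithm, just a simplified illustration of weak substitutability: two substrings are merged into one congruence class whenever they share at least one context (l, r) such that both lur and lvr appear in the sample. The sample strings below are hypothetical.

```python
def contexts(sample):
    # Map each substring of each sample string to the set of contexts
    # (l, r) in which it occurs, i.e. w = l + u + r for some sample w.
    ctx = {}
    for w in sample:
        for i in range(len(w)):
            for j in range(i + 1, len(w) + 1):
                ctx.setdefault(w[i:j], set()).add((w[:i], w[j:]))
    return ctx

def congruence_classes(sample):
    # Union-find over substrings: merge any two substrings that share
    # a context. The resulting classes approximate the syntactic
    # congruence; under the paper's substitutability assumption, one
    # shared context implies full interchangeability.
    ctx = contexts(sample)
    parent = {u: u for u in ctx}

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    by_context = {}
    for u, cs in ctx.items():
        for c in cs:
            by_context.setdefault(c, []).append(u)
    for us in by_context.values():
        for u, v in zip(us, us[1:]):
            parent[find(u)] = find(v)

    classes = {}
    for u in ctx:
        classes.setdefault(find(u), set()).add(u)
    return list(classes.values())

# Hypothetical positive sample from the language {a^n b^n : n >= 1}.
sample = ["ab", "aabb", "aaabbb"]
for cls in congruence_classes(sample):
    print(sorted(cls))
```

In a full learner along these lines, each congruence class would become a nonterminal and the concatenation structure of the monoid would supply the productions; here, for instance, "ab", "aabb", and "aaabbb" all share the empty context and so fall into the class corresponding to the start symbol.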