Learning regular sets from queries and counterexamples. Information and Computation.
A note on contextual binary feature grammars. CLAGI '09: Proceedings of the EACL 2009 Workshop on Computational Linguistic Aspects of Grammatical Inference.
Distributional learning of some context-free languages with a minimally adequate teacher. ICGI '10: Proceedings of the 10th International Colloquium on Grammatical Inference: Theoretical Results and Applications.
Learning context free grammars with the syntactic concept lattice. ICGI '10: Proceedings of the 10th International Colloquium on Grammatical Inference: Theoretical Results and Applications.
Towards general algorithms for grammatical inference. ALT '10: Proceedings of the 21st International Conference on Algorithmic Learning Theory; Theoretical Computer Science.
A learnable representation for syntax using residuated lattices. FG '09: Proceedings of the 14th International Conference on Formal Grammar.
Distributional learning of abstract categorial grammars. LACL '11: Proceedings of the 6th International Conference on Logical Aspects of Computational Linguistics.
Towards dual approaches for learning context-free grammars based on syntactic concept lattices. DLT '11: Proceedings of the 15th International Conference on Developments in Language Theory.
Distributional learning of simple context-free tree grammars. ALT '11: Proceedings of the 22nd International Conference on Algorithmic Learning Theory.
Logical grammars, logical theories. LACL '12: Proceedings of the 7th International Conference on Logical Aspects of Computational Linguistics.
Recently, several "distributional learning algorithms" have been proposed and have achieved considerable success in learning different subclasses of context-free grammars. Distributional learning models and exploits the relation between the strings and the contexts that together form grammatical sentences in the language of the learning target. There are two main approaches. One, which we call primal, constructs nonterminals whose languages are characterized by sets of strings. The other, which we call dual, uses sets of contexts to characterize the language of each nonterminal of the conjectured grammar. This paper shows how these opposing approaches can be integrated into single learning algorithms that learn quite rich classes of context-free grammars.
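As a concrete illustration (not taken from the paper), the string–context relation that distributional learning exploits can be sketched in Python. Given a finite sample of sentences, the primal view works with the set of substrings, while the dual view works with the set of contexts (l, r) such that l + s + r occurs in the sample; the function and variable names here are illustrative assumptions.

```python
def contexts(substring, sample):
    """Return all contexts (l, r) such that l + substring + r is a sentence
    in the sample. This is the (finite-sample) distributional relation."""
    ctxs = set()
    for sentence in sample:
        n = len(sentence)
        for i in range(n + 1):
            for j in range(i, n + 1):
                if sentence[i:j] == substring:
                    ctxs.add((sentence[:i], sentence[j:]))
    return ctxs

def substrings(sample):
    """Return all substrings (including the empty string) of the sample;
    these are the raw material for primal-style nonterminals."""
    subs = set()
    for sentence in sample:
        for i in range(len(sentence) + 1):
            for j in range(i, len(sentence) + 1):
                subs.add(sentence[i:j])
    return subs

# Toy sample suggesting the language { a^n b^n : n >= 1 }.
sample = ["ab", "aabb"]
print(contexts("ab", sample))   # {('', ''), ('a', 'b')}
```

In primal-style learning, substrings with identical context sets are candidates for labeling the same nonterminal; in dual-style learning, a fixed set of contexts plays the defining role, and a nonterminal's language is taken to be the strings accepted by all of those contexts.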