Learning Meaning Before Syntax
ICGI '08 Proceedings of the 9th international colloquium on Grammatical Inference: Algorithms and Applications
Context-free grammars cannot be identified in the limit from positive examples (Gold 1967), yet natural language grammars are more powerful than context-free grammars and humans learn them with remarkable ease from positive examples (Marcus 1993). Identifiability results for formal languages ignore a potentially powerful source of information available to learners of natural languages, namely, meanings. This paper explores the learnability of syntax (i.e., context-free grammars) given positive examples and knowledge of lexical semantics, and the learnability of lexical semantics given knowledge of syntax. The long-term goal is to develop an approach to learning both syntax and semantics that bootstraps itself, using limited knowledge about syntax to infer additional knowledge about semantics, and limited knowledge about semantics to infer additional knowledge about syntax.
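The bootstrapping idea can be pictured as an alternating loop: use the current lexicon to license grammar rules, then use the grammar to categorize unknown words. The sketch below is a hypothetical toy illustration of that loop, not the paper's actual algorithm; all function names, the flat rule representation, and the single-unknown-word heuristic are assumptions made for the example.

```python
# Toy illustration of syntax/semantics bootstrapping (NOT the paper's algorithm).
# A "grammar" is a set of flat rules ("S", (cat1, cat2, ...)); a "lexicon" maps
# words to syntactic categories standing in for lexical-semantic knowledge.

def infer_grammar(sentences, lexicon, grammar):
    """Semantics -> syntax: add a rule for each fully categorized sentence."""
    for words in sentences:
        if all(w in lexicon for w in words):
            grammar.add(("S", tuple(lexicon[w] for w in words)))
    return grammar

def infer_meanings(sentences, grammar, lexicon):
    """Syntax -> semantics: if a sentence matches a rule except for one
    unknown word, assign that word the category the rule expects there."""
    for words in sentences:
        unknown = [i for i, w in enumerate(words) if w not in lexicon]
        if len(unknown) != 1:
            continue
        i = unknown[0]
        for _, rhs in grammar:
            if len(rhs) == len(words) and all(
                lexicon.get(w) == c
                for j, (w, c) in enumerate(zip(words, rhs)) if j != i
            ):
                lexicon[words[i]] = rhs[i]
    return lexicon

def bootstrap(sentences, lexicon, grammar, rounds=3):
    """Alternately refine grammar and lexicon from positive examples only."""
    for _ in range(rounds):
        grammar = infer_grammar(sentences, lexicon, grammar)
        lexicon = infer_meanings(sentences, grammar, lexicon)
    return grammar, lexicon
```

For example, starting from a lexicon that covers "the dog runs" but not "cat", one pass learns the rule S → Det N V from the known sentence and then uses it to categorize "cat" as N in "the cat runs".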