Competitive learning: from interactive activation to adaptive resonance
Connectionist models and their implications: readings from cognitive science
Recursive distributed representations
On connectionist symbol processing (Artificial Intelligence)
The appeal of parallel distributed processing
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
Learning internal representations by error propagation
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
Conceptual Spaces: The Geometry of Thought
Self-Organizing Maps
Holographic Reduced Representation: Distributed Representation for Cognitive Structures
Co-evolution of language and of the language acquisition device
ACL '97 Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics
We present a neural-competitive learning model of language evolution in which several symbol sequences compete to signify a given propositional meaning. Both symbol sequences and propositional meanings are represented by high-dimensional vectors of real numbers. A neural network learns to map between the distributed representations of the symbol sequences and the distributed representations of the propositions. Unlike previous neural network models of language evolution, our model uses a Kohonen Self-Organizing Map with unsupervised learning, thereby avoiding the computational slowdown and biological implausibility of back-propagation networks and the lack of scalability associated with Hebbian-learning networks. After several evolutionary generations, the network develops systematically regular mappings between meanings and sequences, of the sort traditionally associated with symbolic grammars. Because of the potential of neural-like representations for addressing the symbol-grounding problem, this sort of model holds a good deal of promise as a new explanatory mechanism for both language evolution and acquisition.
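The learning mechanism the abstract relies on is a Kohonen Self-Organizing Map trained by unsupervised competitive learning. As a rough illustration of that update rule (not the paper's actual model — the grid size, vector dimensionality, learning rate, and neighbourhood width below are arbitrary assumptions), a minimal SOM training step might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8-dimensional "meaning" vectors, a 10x10 map grid.
DIM, GRID = 8, 10
weights = rng.random((GRID, GRID, DIM))  # one weight vector per map unit

def train_step(x, weights, lr=0.1, sigma=2.0):
    """One Kohonen update: move the best-matching unit (and its
    neighbours, weighted by a Gaussian over grid distance) toward x."""
    # Best-matching unit: smallest Euclidean distance to the input.
    dists = np.linalg.norm(weights - x, axis=-1)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood over the map's grid coordinates.
    ii, jj = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    # Competitive update: each unit moves toward x in proportion to h.
    weights += lr * h[..., None] * (x - weights)
    return bi, bj

# Train on a few random stand-ins for propositional meaning vectors.
for _ in range(100):
    train_step(rng.random(DIM), weights)
```

Because the neighbourhood function drags nearby grid units along with each winner, inputs that are close in meaning space come to activate nearby map units — the topographic ordering that lets the evolved meaning-to-sequence mappings become systematically regular.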