Towards novel neuroscience-inspired computing
Emergent neural computational architectures based on neuroscience
Temporal synchrony of activation spikes has been proposed as the representational code by which the brain segments perceptual patterns into multiple visual objects or multiple auditory sources. In this chapter we look at the implications of this neuroscientific proposal for learning and computation in artificial neural networks. Previous work has defined an artificial neural network model, the Simple Synchrony Network, which uses temporal synchrony to represent and learn about multiple entities. These networks can store arbitrary amounts of information in their internal state by segmenting their representation of state into arbitrarily many entities. They can also generalize what they learn to larger internal states by learning generalizations about individual entities. These claims are demonstrated empirically through results on training a Simple Synchrony Network to do syntactic parsing of real natural language sentences.
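The core idea of the abstract can be sketched in code. This is a minimal illustrative toy, not the chapter's actual architecture: it assumes the internal state is segmented into one vector per entity (one "phase"), and that a single set of shared recurrent weights updates every entity's slot, which is what lets learning about one entity generalize to states holding arbitrarily many entities. All dimensions and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
STATE_DIM, INPUT_DIM = 8, 4

# Shared weights: the SAME parameters update every entity's state
# slot, so nothing learned is tied to a particular entity index.
W_state = rng.normal(scale=0.1, size=(STATE_DIM, STATE_DIM))
W_input = rng.normal(scale=0.1, size=(STATE_DIM, INPUT_DIM))

def step(entity_states, x):
    """One timestep: apply the shared weights to each entity's
    state slot ("phase") given the current input vector x."""
    return [np.tanh(W_state @ s + W_input @ x) for s in entity_states]

def new_entity():
    # A fresh, empty state slot for a newly introduced entity.
    return np.zeros(STATE_DIM)

# Because slots can be added as the sequence unfolds, the total
# information stored is not bounded by a single fixed state vector.
states = [new_entity()]
for t, x in enumerate(rng.normal(size=(5, INPUT_DIM))):
    if t == 2:  # e.g. a new constituent appears mid-sentence
        states.append(new_entity())
    states = step(states, x)

print(len(states))  # number of entities tracked in parallel
```

In a parsing setting, each slot would correspond to something like a phrase-structure constituent, and the output layer would read predictions from each entity's slot with the same shared weights.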