This article explores the use of Simple Synchrony Networks (SSNs) for learning to parse English sentences drawn from a corpus of naturally occurring text. Parsing natural language sentences requires taking a sequence of words and outputting a hierarchical structure representing how those words fit together to form constituents. Feed-forward and Simple Recurrent Networks have had great difficulty with this task, in part because the number of relationships required to specify a structure exceeds the number of unit outputs they have available. SSNs have the representational power to output the $O(n^2)$ possible structural relationships because they extend the $O(n)$ incremental outputs of Simple Recurrent Networks with the $O(n)$ entity outputs provided by Temporal Synchrony Variable Binding. This article presents an incremental representation of constituent structures that allows SSNs to make effective use of both these dimensions. Experiments on learning to parse naturally occurring text show that this output format supports both effective representation and effective generalization in SSNs. To emphasize the importance of this generalization ability, this article also proposes a short-term memory mechanism for retaining a bounded number of constituents during parsing. This mechanism improves the $O(n^2)$ parsing time of the basic SSN architecture to linear time, but experiments confirm that the generalization ability of the SSN networks is maintained.
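The complexity claim can be made concrete with a small counting sketch. The following Python snippet is an illustrative assumption, not the authors' implementation: it assumes an incremental parser introduces one constituent entity per word and must output a structural decision (e.g., attach or not attach) between the current word and every entity it still holds. Without any bound this gives $O(n^2)$ outputs over a sentence of $n$ words; retaining only a bounded number $k$ of constituents in short-term memory caps each word's outputs at $k$, giving $O(nk)$, i.e., linear time for fixed $k$. The function names and the decision-counting scheme are hypothetical.

```python
# Sketch of the complexity argument only; the decision-counting scheme is an
# assumption for illustration, not the SSN architecture itself.

def decisions_unbounded(n):
    """One new constituent entity per word; at word t the parser outputs a
    structural decision for each of the t entities introduced so far."""
    return sum(t for t in range(1, n + 1))          # n(n+1)/2, i.e. O(n^2)

def decisions_bounded(n, k):
    """Same incremental scheme, but a short-term memory keeps at most k
    constituents, so each word contributes at most k decisions."""
    return sum(min(t, k) for t in range(1, n + 1))  # <= n*k, i.e. O(n) for fixed k

if __name__ == "__main__":
    for n in (10, 20, 40):
        print(n, decisions_unbounded(n), decisions_bounded(n, k=5))
    # Doubling n roughly quadruples the unbounded count but only doubles the
    # bounded one, matching the O(n^2) -> linear-time improvement described above.
```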