Mechanisms of sentence processing: assigning roles to constituents
Parallel distributed processing: explorations in the microstructure of cognition, vol. 2
Connectionism and cognitive architecture: a critical analysis
Connections and symbols
Recursive distributed representations
Artificial Intelligence - On connectionist symbol processing
Learning and applying contextual constraints in sentence comprehension
PDP models and general issues in cognitive science
Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
Beyond associative memories: logics and variables in connectionist models
Information Sciences: an International Journal
Exhibiting versus Explaining Systematicity: A Reply to Hadley and Hayward
Minds and Machines
Explaining Systematicity: A Reply to Kenneth Aizawa
Minds and Machines
Does Classicism Explain Universality?
Minds and Machines
Incremental Syntactic Parsing of Natural Language Corpora with Simple Synchrony Networks
IEEE Transactions on Knowledge and Data Engineering
On The Proper Treatment of Semantic Systematicity
Minds and Machines
Synchronous versus conjunctive binding: a false dichotomy?
Connection Science
The problem of rapid variable creation
Neural Computation
Fodor's and Pylyshyn's stand on systematicity in thought and language has been debated and criticized. Van Gelder and Niklasson, among others, have argued that Fodor and Pylyshyn offer no precise definition of systematicity. However, our concern here is with a learning-based formulation of that concept. In particular, Hadley has proposed that a network exhibits strong semantic systematicity when, as a result of training, it can assign appropriate meaning representations to novel sentences (both simple and embedded) which contain words in syntactic positions they did not occupy during training. The experience of researchers indicates that strong systematicity in any form is difficult to achieve in connectionist systems. Herein we describe a network which displays strong semantic systematicity in response to Hebbian, connectionist training. During training, two-thirds of all nouns are presented only in a single syntactic position (either as grammatical subject or object). Yet, during testing, the network correctly interprets thousands of sentences containing those nouns in novel positions. In addition, the network generalizes to novel levels of embedding. Successful training requires a corpus of about 1000 sentences, and network training is quite rapid. The architecture and learning algorithms are purely connectionist, but 'classical' insights are discernible in one respect, viz., that complex semantic representations spatially contain their semantic constituents. However, in other important respects, the architecture is distinctly non-classical.
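The abstract above appeals to Hebbian, connectionist training. As a rough illustration of what a Hebbian update looks like, the following is a minimal sketch (not the paper's actual architecture): a generic outer-product Hebbian rule that associates a toy "noun" activation vector with a "role" (e.g. grammatical subject) vector, so that presenting the noun later retrieves a pattern correlated with the role. All names, dimensions, and the learning rate here are illustrative assumptions.

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.1):
    # Generic Hebbian rule (illustrative, not the paper's algorithm):
    # strengthen each weight in proportion to pre- and post-synaptic
    # co-activation, i.e. delta_W = lr * post (outer) pre.
    return W + lr * np.outer(post, pre)

# Toy example: bind a hypothetical "noun" pattern to a "role" pattern.
rng = np.random.default_rng(0)
noun = rng.standard_normal(8)   # pre-synaptic activation (assumed 8 units)
role = rng.standard_normal(8)   # post-synaptic activation

W = np.zeros((8, 8))
for _ in range(50):
    W = hebbian_update(W, noun, role)

# Presenting the trained noun now evokes a pattern aligned with the role:
retrieved = W @ noun
similarity = float(
    np.dot(retrieved, role)
    / (np.linalg.norm(retrieved) * np.linalg.norm(role))
)
```

Because every update adds the same rank-one outer product, `W @ noun` is just a positive multiple of `role`, so the cosine similarity comes out at 1.0; the sketch only shows the form of the rule, not how the paper's network handles novel syntactic positions or embedding.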