Strong Semantic Systematicity from Hebbian Connectionist Learning

  • Authors:
  • Robert F. Hadley; Michael B. Hayward

  • Affiliations:
  • School of Computing Science, Simon Fraser University, Burnaby, B.C., V5A 1S6, Canada (email: hadley@cs.sfu.ca); Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093-0515, USA (email: hayward@cogsci.ucsd.edu)

  • Venue:
  • Minds and Machines
  • Year:
  • 1997

Abstract

Fodor's and Pylyshyn's stand on systematicity in thought and language has been debated and criticized. Van Gelder and Niklasson, among others, have argued that Fodor and Pylyshyn offer no precise definition of systematicity. However, our concern here is with a learning-based formulation of that concept. In particular, Hadley has proposed that a network exhibits strong semantic systematicity when, as a result of training, it can assign appropriate meaning representations to novel sentences (both simple and embedded) which contain words in syntactic positions they did not occupy during training. The experience of researchers indicates that strong systematicity in any form is difficult to achieve in connectionist systems. Herein we describe a network which displays strong semantic systematicity in response to Hebbian, connectionist training. During training, two-thirds of all nouns are presented only in a single syntactic position (either as grammatical subject or object). Yet, during testing, the network correctly interprets thousands of sentences containing those nouns in novel positions. In addition, the network generalizes to novel levels of embedding. Successful training requires a corpus of about 1000 sentences, and network training is quite rapid. The architecture and learning algorithms are purely connectionist, but 'classical' insights are discernible in one respect, viz., that complex semantic representations spatially contain their semantic constituents. However, in other important respects, the architecture is distinctly non-classical.
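
To make the abstract's training regime concrete, the sketch below generates a toy corpus in which two-thirds of the nouns are restricted to a single syntactic position during training, enumerates the novel-position test sentences, and shows the outer-product form of a plain Hebbian weight update. This is a hypothetical illustration only, not the Hadley-Hayward network: the vocabulary, grammar, layer sizes, and learning rate are invented for the example, and the real model was trained on roughly 1000 sentences including embedded clauses.

```python
import itertools
import numpy as np

# Invented toy vocabulary (not the authors' materials).
nouns = ["dog", "cat", "boy", "girl", "bird", "fish"]
verbs = ["sees", "chases"]

# Two-thirds of the nouns occur in only one syntactic position in training.
subject_only = {"dog", "cat"}
object_only = {"boy", "girl"}
free = {"bird", "fish"}  # the remaining third appears in both positions

def allowed_in_training(subj, obj):
    # A sentence is a training item only if it respects the restrictions.
    return subj not in object_only and obj not in subject_only

train = [(s, v, o) for s, v, o in itertools.product(nouns, verbs, nouns)
         if allowed_in_training(s, o)]

# Test sentences place a restricted noun in a position it never
# occupied during training (the strong-systematicity probe).
test = [(s, v, o) for s, v, o in itertools.product(nouns, verbs, nouns)
        if s in object_only or o in subject_only]

print(len(train), "training sentences;", len(test), "novel-position tests")

def hebb_update(W, pre, post, eta=0.1):
    # Plain Hebbian learning: strengthen each weight in proportion to
    # the co-activity of its pre- and post-synaptic units.
    return W + eta * np.outer(post, pre)

# Minimal usage: one update step on a small random weight matrix.
rng = np.random.default_rng(0)
W = hebb_update(np.zeros((4, 3)), rng.random(3), rng.random(4))
```

The probe in `test` is what distinguishes strong systematicity from ordinary generalization: the network must interpret, say, "boy" as a grammatical subject even though every training exposure placed it in object position.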