Neural networks discover a near-identity relation to distinguish simple syntactic forms

  • Authors:
  • Thomas R. Shultz; Alan C. Bale

  • Affiliations:
  • Department of Psychology, McGill University, Montreal, Canada H3A 1B1; Department of Linguistics, McGill University, Montreal, Canada H3A 1A7

  • Venue:
  • Minds and Machines
  • Year:
  • 2006

Abstract

Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and that they use this near-identity relation to distinguish sentences that are consistent or inconsistent with a familiar grammar. Recent simulations that were claimed to show that this model did not really learn these grammars [Vilcu, M., & Hadley, R. F. (2005). Minds and Machines, 15, 359–382] confounded syntactic types with speech sounds and did not perform standard statistical tests of results.
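
The modeled task comes from Marcus et al. (1999): infants hear three-word sentences generated by an ABA or ABB grammar and later discriminate novel sentences consistent with the familiar grammar from inconsistent ones. The sketch below illustrates that paradigm with a tiny autoencoder trained by plain gradient descent; it is not the authors' model, and the feature encodings, network sizes, and training settings are illustrative assumptions rather than values from the paper.

# A minimal sketch of the ABA/ABB discrimination paradigm, assuming
# random binary feature vectors for words and a small autoencoder.
# Reconstruction error on a probe sentence is used as a novelty index.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 6   # assumed per-word feature vector size
N_HIDDEN = 4     # assumed hidden-layer width (a bottleneck)

def random_word():
    """A hypothetical word: a random binary feature vector."""
    return rng.integers(0, 2, N_FEATURES).astype(float)

def sentence(pattern, a, b):
    """Concatenate word vectors according to an ABA or ABB template."""
    words = {"A": a, "B": b}
    return np.concatenate([words[w] for w in pattern])

# Training set: ABA sentences built from a small training vocabulary.
train_vocab = [(random_word(), random_word()) for _ in range(16)]
X = np.stack([sentence("ABA", a, b) for a, b in train_vocab])

dim = X.shape[1]
W1 = rng.normal(0, 0.1, (dim, N_HIDDEN))
W2 = rng.normal(0, 0.1, (N_HIDDEN, dim))

def forward(x):
    h = np.tanh(x @ W1)   # hidden encoding
    return h, h @ W2      # linear reconstruction of the input

# Plain gradient descent on mean squared reconstruction error.
lr = 0.01
for epoch in range(2000):
    h, out = forward(X)
    err = out - X
    gW2 = h.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

# Probe with novel words: a grammar-consistent ABA item should be
# reconstructed better (lower error) than an inconsistent ABB item.
a, b = random_word(), random_word()
for pattern in ("ABA", "ABB"):
    x = sentence(pattern, a, b)
    _, out = forward(x[None, :])
    print(pattern, "reconstruction error:", float(np.mean((out - x) ** 2)))

Run as written, the inconsistent ABB probe typically yields the higher reconstruction error, because the bottleneck forces the network to exploit the near-identity of the first and third word positions in the training sentences; treating reconstruction error as a familiarity measure is the analogue here of the model's ability to distinguish consistent from inconsistent sentences.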