Learning in neural networks with material synapses

  • Authors:
  • Daniel J. Amit; Stefano Fusi

  • Venue:
  • Neural Computation
  • Year:
  • 1994

Abstract

We discuss the long-term maintenance of acquired memory in the synaptic connections of a perpetually learning electronic device. This is effected by ascribing to each synapse a finite number of stable states, in which it can remain for indefinitely long periods. Learning uncorrelated stimuli is expressed as a stochastic process produced by the neural activities on the synapses. In several interesting cases the stochastic process can be analyzed in detail, leading to a clarification of the performance of the network, as an associative memory, during the process of uninterrupted learning. The stochastic nature of the process and the existence of an asymptotic distribution for the synaptic values in the network imply generically that the memory is a palimpsest, but that its capacity is as low as log N for a network of N neurons. The only way we find to avoid this tight constraint is to allow the parameters governing the learning process (the coding level of the stimuli, the transition probabilities for potentiation and depression, and the number of stable synaptic levels) to depend on the number of neurons. It is shown that a network with synapses that have two stable states can dynamically learn with optimal storage efficiency, be a palimpsest, and maintain its (associative) memory for an indefinitely long time, provided the coding level is low and depression is equilibrated against potentiation. We suggest that an option so easily implementable in material devices would not have been overlooked by biology. Finally, we discuss stochastic learning on synapses with a variable number of stable synaptic states.
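To make the stochastic learning process described in the abstract concrete, here is a minimal simulation sketch of binary (two-state) synapses under ongoing learning of uncorrelated, sparsely coded stimuli. The specific parameter values (N, f, q_pot), the Hebbian transition rule, and the balance condition q_dep = q_pot · f/(1 − f) are illustrative assumptions for this sketch, not the paper's exact model; the run tracks how the memory trace of one stored stimulus fades as new stimuli keep arriving (the palimpsest property).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500             # number of neurons (illustrative)
f = 0.05            # coding level: fraction of neurons active per stimulus
q_pot = 0.3         # potentiation probability (illustrative)
q_dep = q_pot * f / (1 - f)   # depression equilibrated against potentiation

# Binary synapses: each entry of J sits in one of two stable states, 0 or 1.
J = rng.integers(0, 2, size=(N, N)).astype(np.int8)

def present(J, xi):
    """One stochastic learning step for a binary stimulus xi in {0,1}^N.

    Hebbian candidate transitions (an assumed rule for this sketch):
      pre and post both active  -> potentiate with probability q_pot
      pre active, post inactive -> depress with probability q_dep
    """
    pre = xi.astype(bool)[np.newaxis, :]    # presynaptic activity (columns)
    post = xi.astype(bool)[:, np.newaxis]   # postsynaptic activity (rows)
    r = rng.random(J.shape)
    J[pre & post & (r < q_pot)] = 1
    J[pre & ~post & (r < q_dep)] = 0

def signal(J, xi):
    """Mean efficacy among synapses between neurons co-active in xi."""
    idx = np.flatnonzero(xi)
    return J[np.ix_(idx, idx)].mean()

tracked = (rng.random(N) < f).astype(np.int8)
present(J, tracked)
trace = [signal(J, tracked)]

# Ongoing learning of uncorrelated stimuli gradually overwrites the tracked
# memory: the palimpsest behavior of the asymptotic synaptic distribution.
for _ in range(1000):
    present(J, (rng.random(N) < f).astype(np.int8))
    trace.append(signal(J, tracked))

print(f"signal just after storage:        {trace[0]:.3f}")
print(f"signal after 1000 further stimuli: {trace[-1]:.3f}")
```

In this balanced setting the asymptotic distribution keeps about half of the synapses potentiated, so the stored pattern's elevated signal decays exponentially back toward that baseline at a rate of order f²·q_pot per stimulus. That exponential decay is what lies behind the log N capacity bound quoted in the abstract: a pattern stored p stimuli ago must still rise above the retrieval noise, which shrinks only polynomially in N.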