We discuss the long-term maintenance of acquired memory in the synaptic connections of a perpetually learning electronic device. This is effected by ascribing to each synapse a finite number of stable states, in which it can remain for indefinitely long periods. Learning uncorrelated stimuli is expressed as a stochastic process induced by the neural activities on the synapses. In several interesting cases the stochastic process can be analyzed in detail, clarifying the performance of the network, as an associative memory, during uninterrupted learning. The stochastic nature of the process and the existence of an asymptotic distribution for the synaptic values in the network generically imply that the memory is a palimpsest, but that the capacity is as low as log N for a network of N neurons. The only way we find to avoid this tight constraint is to allow the parameters governing the learning process (the coding level of the stimuli, the transition probabilities for potentiation and depression, and the number of stable synaptic levels) to depend on the number of neurons. It is shown that a network whose synapses have two stable states can dynamically learn with optimal storage efficiency, be a palimpsest, and maintain its associative memory for an indefinitely long time, provided the coding level is low and depression is equilibrated against potentiation. We suggest that an option so easily implementable in material devices would not have been overlooked by biology. Finally, we discuss stochastic learning on synapses with a variable number of stable synaptic states.
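The palimpsest behaviour described above can be illustrated with a minimal simulation sketch. This is not the paper's model, only a caricature under simplifying assumptions: each binary synapse is treated independently, a random low-coding stimulus presents it with a coincident pre/post pair with probability f², and all parameter values (f, q_plus, the network size, the number of stimuli) are illustrative choices. With depression equilibrated against potentiation, the trace of a stored stimulus decays gradually as new uncorrelated stimuli overwrite it:

```python
import numpy as np

rng = np.random.default_rng(0)

n_syn = 200_000   # binary synapses, tracked independently (illustrative size)
f = 0.05          # coding level: probability a neuron is active for a stimulus
q_plus = 0.5      # potentiation probability when pre and post are both active
# equilibrate depression against potentiation: f^2 q_plus = f(1-f) q_minus
q_minus = f * q_plus / (1 - f)

def present_stimulus(w):
    """One random uncorrelated stimulus: a synapse sees coincident pre/post
    activity with prob f^2 (candidate potentiation) or active-pre /
    inactive-post activity with prob f(1-f) (candidate depression)."""
    u = rng.random(n_syn)
    pot = (u < f * f) & (rng.random(n_syn) < q_plus)
    dep = (u >= f * f) & (u < f * f + f * (1 - f)) & (rng.random(n_syn) < q_minus)
    return (w | pot) & ~dep

# start from the asymptotic distribution (half the synapses potentiated)
w = rng.random(n_syn) < 0.5

# encode one tracked stimulus
coincident = rng.random(n_syn) < f * f   # synapses it drives coincidently
w = w | (coincident & (rng.random(n_syn) < q_plus))

# memory trace: excess potentiation among the tracked synapses
signal = [w[coincident].mean() - w[~coincident].mean()]
for _ in range(500):
    w = present_stimulus(w)              # each new stimulus overwrites a little
    signal.append(w[coincident].mean() - w[~coincident].mean())

print(f"trace after encoding: {signal[0]:.3f}")
print(f"trace after 500 more stimuli: {signal[-1]:.3f}")
```

The trace relaxes toward zero at rate f²q_plus + f(1−f)q_minus per stimulus, so the memory lifetime scales like 1/f² at low coding level: the network keeps learning indefinitely while old memories fade smoothly rather than collapsing, which is the palimpsest property.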