Spiking Neuron Models: An Introduction
Predictive learning rules, in which synaptic changes are driven by the difference between an input and its reconstruction from internal variables, have proven to be very stable and efficient. However, it is not clear how such learning rules could be implemented in biological synapses. Here we propose an implementation that exploits the synchronization of neural activities within a recurrent network. In this framework, the asymmetric shape of spike-timing-dependent plasticity (STDP) can be interpreted as a self-stabilizing mechanism. Our results suggest a novel hypothesis concerning the computational role of neural synchrony and oscillations.
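The core idea of a reconstruction-driven (predictive) learning rule can be sketched as follows. This is a minimal illustration, not the paper's spiking implementation: it assumes a linear encoding and reconstruction through a single weight matrix `W`, a hypothetical generative model in which inputs lie in a low-dimensional subspace, and an illustrative learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_lat = 8, 3

# Hypothetical generative model: inputs lie in a 3-dimensional subspace of R^8.
A = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_lat))

W = rng.normal(scale=0.1, size=(n_in, n_lat))  # reconstruction weights (assumed linear)
eta = 0.02                                      # learning rate (illustrative value)

def recon_error(W):
    """Mean squared reconstruction error on a fresh batch of inputs."""
    X = A @ rng.normal(size=(n_lat, 500))
    return float(np.mean((X - W @ (W.T @ X)) ** 2))

err_before = recon_error(W)
for _ in range(5000):
    x = A @ rng.normal(size=n_lat)     # input drawn at random
    y = W.T @ x                        # internal variables (encoding)
    x_hat = W @ y                      # reconstruction from internal variables
    W += eta * np.outer(x - x_hat, y)  # synaptic change driven by reconstruction error
err_after = recon_error(W)
```

The rule's self-stabilizing character is visible here: weight growth halts on its own once the reconstruction matches the input, with no explicit normalization step, which is the abstract-level property the asymmetric STDP window is argued to provide in the biological setting.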