Abstract

Predictive learning rules, in which synaptic changes are driven by the difference between a random input and its reconstruction from internal variables, have proven stable and efficient. However, it remains unclear how such learning rules could be implemented in biological synapses. Here we propose an implementation that exploits the synchronization of neural activities within a recurrent network. In this framework, the asymmetric shape of spike-timing-dependent plasticity (STDP) can be interpreted as a self-stabilizing mechanism. Our results suggest a novel hypothesis concerning the computational role of neural synchrony and oscillations.
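As a minimal illustration of a predictive learning rule of the kind described above (a hedged sketch, not the paper's actual model): the weight update is proportional to the reconstruction error, i.e. the input minus the prediction of that input derived from an internal variable. With a single linear unit this reduces to an Oja-like rule, whose error term self-stabilizes the weights. All parameter values and the linear-unit setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, eta, steps = 5, 0.05, 2000

# Feedforward weights, initialized small.
w = rng.normal(scale=0.1, size=n_inputs)

for _ in range(steps):
    x = rng.normal(size=n_inputs)   # random input sample
    y = w @ x                       # internal variable (unit activity)
    x_hat = y * w                   # reconstruction of the input from y
    w += eta * (x - x_hat) * y      # error-driven (predictive) update

# The error term keeps the weights bounded: for isotropic inputs the
# expected norm of w converges toward 1 without any explicit constraint.
print(np.linalg.norm(w))
```

The same update without the reconstruction term (`w += eta * x * y`, plain Hebbian learning) diverges; subtracting the prediction is what makes the rule self-stabilizing, which is the property the abstract attributes to the asymmetric shape of STDP.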