We explore the role of local plasticity rules in learning statistical priors in a self-organizing recurrent neural network (SORN). The network receives input sequences composed of different symbols and learns the structure embedded in these sequences via a simple spike-timing-dependent plasticity (STDP) rule, while synaptic normalization and intrinsic plasticity keep network activity at a low level. After learning, the network exhibits spontaneous activity that matches the stimulus-evoked activity observed during training and can thus be interpreted as samples from the network's prior probability distribution over evoked activity states. Further, we show how learning the frequency and spatio-temporal characteristics of the input sequences influences network performance in several classification tasks. These results suggest a novel connection between low-level learning mechanisms and high-level concepts of statistical inference.
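The abstract names three interacting plasticity rules. A minimal sketch of how such a SORN update step might look is given below, assuming binary threshold units and the standard formulations of STDP, synaptic normalization, and intrinsic plasticity; all parameter values, population sizes, and the toy two-symbol input drive are illustrative assumptions, not the paper's actual settings.

```python
# Minimal SORN sketch: binary units plus the three plasticity rules named
# in the abstract. Parameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_E, N_I = 200, 40              # excitatory / inhibitory population sizes (assumed)
eta_stdp, eta_ip = 0.004, 0.01  # plasticity learning rates (assumed)
h_ip = 0.1                      # target firing rate for intrinsic plasticity (assumed)

# Sparse random recurrent weights; rows of W_EE are normalized to sum to 1.
W_EE = rng.random((N_E, N_E)) * (rng.random((N_E, N_E)) < 0.05)
np.fill_diagonal(W_EE, 0.0)
W_EE /= np.maximum(W_EE.sum(axis=1, keepdims=True), 1e-12)
W_EI = rng.random((N_E, N_I)) / N_I
W_IE = rng.random((N_I, N_E)) / N_E

T_E = rng.uniform(0.0, 0.5, N_E)  # excitatory thresholds (plastic)
T_I = rng.uniform(0.0, 1.0, N_I)  # inhibitory thresholds (fixed)

x = (rng.random(N_E) < h_ip).astype(float)  # excitatory state
y = np.zeros(N_I)                           # inhibitory state

def step(x, y, u):
    """One network update followed by the three plasticity rules.
    u is the external input vector for this time step."""
    global W_EE, T_E
    x_new = (W_EE @ x - W_EI @ y + u - T_E > 0).astype(float)
    y_new = (W_IE @ x - T_I > 0).astype(float)

    # STDP: strengthen j->i when j fired just before i, weaken the reverse.
    dW = eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W_EE = np.clip(W_EE + dW * (W_EE > 0), 0.0, 1.0)  # only existing synapses

    # Synaptic normalization: keep each unit's total incoming E weight at 1.
    W_EE /= np.maximum(W_EE.sum(axis=1, keepdims=True), 1e-12)

    # Intrinsic plasticity: nudge thresholds so units fire at rate h_ip.
    T_E += eta_ip * (x_new - h_ip)

    return x_new, y_new

# Drive the network with a repeating two-symbol sequence: each symbol
# activates its own small pool of input units (a hypothetical toy stimulus).
pools = [rng.choice(N_E, 10, replace=False) for _ in range(2)]
for t in range(1000):
    u = np.zeros(N_E)
    u[pools[t % 2]] = 0.5
    x, y = step(x, y, u)
```

In this sketch, STDP shapes the recurrent weights to reflect the temporal order of the input symbols, while normalization and intrinsic plasticity act homeostatically, which is one common reading of how the network keeps activity low yet structured; spontaneous activity could then be probed by running `step` with `u = 0`.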