Stochastic perturbation methods for spike-timing-dependent plasticity
Neural Computation
On-line machine-learning algorithms, many biological spike-timing-dependent plasticity (STDP) learning rules, and stochastic neural dynamics evolve by Markov processes. A complete description of such a system is given by the probability densities of its variables. The evolution and equilibrium state of these densities are governed by a Chapman-Kolmogorov equation in discrete time, or by a master equation in continuous time. These formulations are analytically intractable for most cases of interest, and to make progress a nonlinear Fokker-Planck equation (FPE) is often used in their place. The FPE is limited, and some argue that its application to jump processes (such as those arising in these problems) is fundamentally flawed. We develop a well-grounded perturbation expansion that provides approximations for both the density and its moments. The approach is based on the system-size expansion of statistical physics (which does not itself approximate the density), but our simple development makes the methods accessible and invites application to diverse problems. We apply the method to calculate the equilibrium distributions for two biologically observed STDP learning rules and for a simple nonlinear machine-learning problem. In all three examples, we show that our perturbation series agrees well with Monte Carlo simulations in regimes where the FPE breaks down.
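To make the kind of comparison the abstract describes concrete, the sketch below simulates a minimal jump-process learning rule by Monte Carlo and compares its equilibrium mean and variance against small-jump analytical predictions. This is a toy model with illustrative parameters, not either of the paper's STDP rules or its perturbation expansion: a weight is potentiated toward 1 with probability `p` and depressed toward 0 otherwise, each event moving it a fraction `a` of the remaining distance.

```python
import random
import statistics

def simulate(a=0.05, p=0.3, steps=200_000, burn=20_000, seed=1):
    """Monte Carlo for a toy Markov jump learning rule (illustrative only):
    with prob p, potentiate w += a*(1-w); otherwise, depress w -= a*w."""
    rng = random.Random(seed)
    w = p  # start near the expected equilibrium
    samples = []
    for t in range(steps):
        if rng.random() < p:
            w += a * (1.0 - w)   # potentiation jump
        else:
            w -= a * w           # depression jump
        if t >= burn:            # discard transient before sampling
            samples.append(w)
    return samples

a, p = 0.05, 0.3
samples = simulate(a=a, p=p)
mc_mean = statistics.fmean(samples)
mc_var = statistics.pvariance(samples)

# Analytical equilibrium moments for this toy rule:
# the mean drift a*(p - w) vanishes at w* = p, and linearizing the
# fluctuations around w* gives a stationary variance a*p*(1-p)/(2-a).
pred_mean = p
pred_var = a * p * (1 - p) / (2 - a)
```

For this particular linear-in-`w` rule the moment predictions happen to be exact, so the Monte Carlo estimates should match them to within sampling error; the paper's point is that for genuinely nonlinear rules such diffusion-style approximations can fail, which is where the perturbation expansion is needed.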