Artificial neural networks increasingly incorporate spiking dynamics to achieve greater computational efficiency. This becomes especially attractive for on-chip implementation using dedicated neuromorphic hardware. However, both spiking neural networks and neuromorphic hardware have historically had difficulty implementing efficient, effective learning rules. The best-known spiking neural network learning paradigm is Spike Timing Dependent Plasticity (STDP), which adjusts the strength of a connection in response to the time difference between the pre- and post-synaptic spikes. Approaches that relate learning to the membrane potential of the post-synaptic neuron have emerged as possible alternatives to the more common STDP rule, with various implementations and approximations. Here we use a new type of neuromorphic hardware, SpiNNaker, which represents the flexible "neuromimetic" architecture, to demonstrate a new approach to this problem. Based on the standard STDP algorithm with modifications and approximations, a new rule, called STDP TTS (Time-To-Spike), relates the membrane potential to the Long Term Potentiation (LTP) part of the basic STDP rule, while the standard STDP rule is retained for the Long Term Depression (LTD) part of the algorithm. We show that, on the basis of the membrane potential, it is possible to make a statistical prediction of the time the neuron needs to reach threshold, and therefore the LTP part of the STDP algorithm can be triggered as soon as the neuron receives a spike. In our system these approximations allow efficient memory access, reducing both the overall computation time and the memory bandwidth required. The improvements presented here are significant for real-time applications such as those for which the SpiNNaker system has been designed. We present simulation results that demonstrate the efficacy of this algorithm using one or more input patterns repeated over the whole duration of the simulation.
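The mechanism described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: all parameter values, the linear time-to-spike estimator, and the function names are assumptions introduced here; the abstract only states that LTP is triggered at presynaptic spike arrival using a statistical prediction of the time to threshold, while LTD follows the standard STDP exponential window.

```python
import math

# Hypothetical constants; the actual values used on SpiNNaker are not
# given in the abstract.
A_PLUS, A_MINUS = 0.01, 0.012      # LTP / LTD amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # STDP time constants (ms)
V_REST, V_THRESH = -65.0, -50.0    # membrane potential range (mV)
TAU_TTS = 10.0                     # scale of the time-to-spike estimate (ms)

def estimate_time_to_spike(v):
    """Predict how long the neuron needs to reach threshold from its
    current membrane potential v. A simple monotone (linear) mapping is
    assumed: the closer v is to threshold, the smaller the estimate."""
    frac = max(0.0, min(1.0, (V_THRESH - v) / (V_THRESH - V_REST)))
    return TAU_TTS * frac

def ltp_on_presynaptic_spike(w, v_post):
    """STDP TTS potentiation: applied immediately when a presynaptic
    spike arrives, substituting the predicted time-to-spike for the
    (future) postsynaptic spike time in the usual LTP window."""
    dt = estimate_time_to_spike(v_post)
    return w + A_PLUS * math.exp(-dt / TAU_PLUS)

def ltd_on_postsynaptic_spike(w, t_pre, t_post):
    """Standard STDP depression: a presynaptic spike arriving after the
    postsynaptic spike (t_pre > t_post) weakens the synapse."""
    dt = t_pre - t_post
    return w - A_MINUS * math.exp(-dt / TAU_MINUS) if dt > 0 else w
```

The practical point is that the LTP update no longer needs the postsynaptic spike history: each synaptic row can be fetched, updated, and written back once, at spike arrival, which is what makes the rule attractive for event-driven simulators and memory-bandwidth-limited hardware.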
On-chip results show that the STDP TTS algorithm allows the neural network to adapt to and detect the incoming pattern, with improvements both in the reliability of, and the time required for, consistent output. Through the approximations suggested in this paper, we introduce a learning rule that is easy to implement both in event-driven simulators and in dedicated hardware, and that reduces computational complexity relative to the standard STDP rule. Such a rule offers a promising solution, complementary to standard STDP evaluation algorithms, for real-time learning with spiking neural networks in time-critical applications.