Supervised learning in multilayer spiking neural networks
Neural Computation
Few algorithms for supervised training of spiking neural networks exist that can deal with patterns of multiple spikes, and their computational properties are largely unexplored. We demonstrate in a set of simulations that the ReSuMe learning algorithm can successfully be applied to layered neural networks. Input and output patterns are encoded as spike trains of multiple precisely timed spikes, and the network learns to transform the input trains into target output trains. This is done by combining the ReSuMe learning algorithm with multiplicative scaling of the connections of downstream neurons. In particular, we show that layered networks with one hidden layer can learn the basic logical operations, including Exclusive-Or, while networks without a hidden layer cannot, mirroring an analogous result for layered networks of rate neurons. While supervised learning in spiking neural networks is not yet mature enough for technical applications, exploring the computational properties of spiking neural networks advances our understanding of how computations can be carried out with spike trains.
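As a rough illustration of the kind of learning rule the abstract refers to, the sketch below shows a ReSuMe-style weight update for a single synapse in Python. The exponential learning window and the parameter values (a, A, tau) are assumptions chosen for illustration, not the settings used in the paper, and the multiplicative scaling of downstream connections mentioned above is not shown.

```python
import numpy as np

def resume_weight_update(input_spikes, output_spikes, target_spikes,
                         a=0.01, A=0.5, tau=5.0):
    """Illustrative ReSuMe-style weight change for one synapse.

    input_spikes, output_spikes, target_spikes: arrays of spike times (ms).
    a:   non-Hebbian contribution applied at every target/output spike.
    A:   amplitude of the exponential learning window (hypothetical value).
    tau: time constant of the learning window in ms (hypothetical value).
    Returns the total weight change accumulated over one trial.
    """
    def window_sum(t):
        # Sum the learning window over all input spikes preceding time t.
        pre = input_spikes[input_spikes < t]
        return np.sum(A * np.exp(-(t - pre) / tau))

    dw = 0.0
    for t in target_spikes:   # desired spikes potentiate the synapse
        dw += a + window_sum(t)
    for t in output_spikes:   # actual (erroneous) spikes depress it
        dw -= a + window_sum(t)
    return dw

# Example: one input spike; the output fires later than the target spike.
inp = np.array([5.0])
out = np.array([20.0])
tgt = np.array([10.0])
print(resume_weight_update(inp, out, tgt))  # positive: strengthen the synapse
```

The update is driven by the difference between the desired and the actual output spike train, so it vanishes once the neuron fires exactly at the target times; this is the property the simulations in the paper exploit when training the output layer.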