This paper presents a synaptic weight association training (SWAT) algorithm for spiking neural networks (SNNs). SWAT merges the Bienenstock-Cooper-Munro (BCM) learning rule with spike-timing-dependent plasticity (STDP). The combined STDP/BCM rule yields a unimodal weight distribution in which the height of the STDP plasticity window is modulated, producing stability after a period of training. The SNN uses a single training neuron during the training phase, to which data associated with all classes are presented. The rule then maps weights to the classifying output neurons to reflect similarities in the data across the classes. The SNN also includes both excitatory and inhibitory facilitating synapses, which create a frequency-routing capability that allows information presented to the network to be routed to different hidden-layer neurons. A variable neuron threshold level simulates the refractory period. SWAT is initially benchmarked against the nonlinearly separable Iris and Wisconsin Breast Cancer datasets. The results show that the proposed training algorithm achieves a convergence accuracy of 95.5% and 96.2% on the Iris and Wisconsin training sets, respectively, and 95.3% and 96.7% on the corresponding testing sets. Noise experiments show that SWAT has good generalization capability. SWAT is also benchmarked in an isolated-digit automatic speech recognition (ASR) system using a subset of the TI46 speech corpus. With SWAT as the classifier, the ASR system achieves an accuracy of 98.875% for training and 95.25% for testing.
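The core mechanism described above, an STDP weight update whose plasticity-window height is modulated by a BCM-like term so that weights stabilize into a unimodal distribution, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact SWAT rule: the function names, parameter values, and the simple pairwise-update form are all assumptions.

```python
import numpy as np

def bcm_modulation(post_rate, theta):
    # BCM-style term: positive (potentiating) when postsynaptic activity
    # exceeds the sliding threshold theta, negative (depressing) below it.
    return post_rate * (post_rate - theta)

def stdp_update(w, dt, post_rate, theta,
                a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0,
                w_min=0.0, w_max=1.0):
    """One pairwise STDP update for a single synapse.

    dt = t_post - t_pre in ms; the exponential STDP window height is
    scaled by the BCM modulation term, so updates shrink as postsynaptic
    activity approaches the sliding threshold (hypothetical parameters).
    """
    phi = bcm_modulation(post_rate, theta)
    if dt >= 0:
        # Pre-before-post pairing: potentiation branch of the window.
        dw = a_plus * phi * np.exp(-dt / tau_plus)
    else:
        # Post-before-pre pairing: depression branch of the window.
        dw = -a_minus * phi * np.exp(dt / tau_minus)
    # Clip to keep weights bounded, supporting a stable distribution.
    return float(np.clip(w + dw, w_min, w_max))

# In a BCM scheme the threshold itself slides with recent activity,
# e.g. a running average of the squared postsynaptic rate:
def update_theta(theta, post_rate, tau_theta=100.0):
    return theta + (post_rate ** 2 - theta) / tau_theta
```

With a fixed pairing, raising `theta` toward the postsynaptic rate drives `phi` toward zero, so weight changes taper off, which is one way to read the abstract's claim that modulating the window height yields stability after training.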