Constructing robust liquid state machines to process highly variable data streams
ICANN'12: Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning, Part I
Liquid state machines (LSMs) exploit the power of recurrent spiking neural networks (SNNs) without training the SNN. Instead, LSMs randomly generate this network and then use it as a filter for a generic machine learner. Previous research has shown that LSMs can yield competitive results; however, the process can require numerous time-consuming epochs before finding a viable filter. We have developed a method for iteratively refining these randomly generated networks, so that the LSM will yield a more effective filter in fewer epochs than the traditional method. We define a new metric for evaluating the quality of a filter before calculating the accuracy of the LSM. The LSM then uses this metric to drive a novel algorithm founded on principles integral to both Hebbian and reinforcement learning. We compare this new method with traditional LSMs on two artificial pattern recognition problems and two simplified problems derived from the TIMIT dataset. Depending on the problem, our method demonstrates improvements in accuracy ranging from 15% to almost 600%.
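The pipeline the abstract describes — a randomly generated recurrent network used as a fixed filter, with a separation-style metric to judge filter quality before training a readout — can be sketched in miniature. The following is a hedged illustration only: it uses a simplified rate-based (tanh) recurrent network in place of the authors' spiking network, and all names, sizes, and parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_RES, N_IN = 100, 3  # reservoir and input sizes (arbitrary choices)

# Randomly generate the recurrent "filter" network; it is never trained.
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # scale toward stable dynamics
W_in = rng.normal(0.0, 1.0, (N_RES, N_IN))

def filter_signal(u_seq):
    """Run the random recurrent network over an input sequence and return
    its final state, which a generic readout learner would consume."""
    x = np.zeros(N_RES)
    for u in u_seq:
        x = np.tanh(W_res @ x + W_in @ u)
    return x

# Two toy input streams standing in for examples from different classes.
u_a = rng.normal(0.5, 0.1, (50, N_IN))
u_b = rng.normal(-0.5, 0.1, (50, N_IN))

# A separation-style quality metric: distance between reservoir states
# produced by inputs from different classes. A larger separation suggests
# the random filter will be easier for the readout to exploit, so it can
# be computed cheaply before evaluating full LSM accuracy.
separation = np.linalg.norm(filter_signal(u_a) - filter_signal(u_b))
```

In an iterative-refinement scheme like the one the abstract outlines, a metric of this kind would score each candidate network so that weight updates can be accepted or rejected without running the full train-and-evaluate cycle each epoch.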