We investigate the possibility of inducing temporal structures without fading memory in recurrent networks of spiking neurons operating strictly in the pulse-coding regime. We extend SpikeProp (Bohte, Kok, & La Poutré, 2002), an existing gradient-based algorithm for training feedforward spiking neuron networks, to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory specified by simple Moore machines (MM) can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information-processing states that code potentially unbounded histories of processed inputs. We show that it is often possible to extract the target MM from a trained RSNN by grouping together similar spike trains appearing in the recurrent layer. Even when the target MM is not perfectly induced in an RSNN, the extraction procedure can reveal weaknesses of the induced mechanism and the extent to which the target machine has been learned.
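The extraction step described above, grouping similar recurrent-layer spike trains into abstract states and reading a Moore machine off the resulting state sequence, can be illustrated with a short sketch. The abstract does not specify the grouping method, so the choices below are assumptions made for illustration only: k-means clustering, spike trains encoded as fixed-length vectors of firing times, and the function names `kmeans` and `extract_moore_machine` are all hypothetical, not the authors' implementation.

```python
import numpy as np
from collections import Counter, defaultdict

def kmeans(X, k, iters=50, seed=0):
    # Plain k-means over the rows of X; returns one cluster label per row.
    # (Assumed stand-in for the paper's "grouping of similar spike trains".)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def extract_moore_machine(spike_feats, inputs, outputs, n_states):
    """Cluster per-step recurrent-layer spike trains into candidate states
    and read off a Moore machine from the observed transitions.

    spike_feats -- (T, d) array: one fixed-length feature vector per input
                   step (assumed here to be the firing times of the d
                   recurrent neurons, i.e. a pulse-coded state)
    inputs      -- length-T sequence of input symbols
    outputs     -- length-T sequence of network outputs
    """
    states = kmeans(np.asarray(spike_feats, dtype=float), n_states)

    # A Moore machine attaches outputs to states: take the majority output
    # observed within each cluster.
    out_votes = defaultdict(Counter)
    for s, o in zip(states, outputs):
        out_votes[int(s)][o] += 1
    state_output = {s: c.most_common(1)[0][0] for s, c in out_votes.items()}

    # Transition function: majority vote over consecutive clustered states,
    # keyed by the input symbol that drove each step.
    trans_votes = defaultdict(Counter)
    for t in range(len(states) - 1):
        trans_votes[(int(states[t]), inputs[t + 1])][int(states[t + 1])] += 1
    delta = {key: c.most_common(1)[0][0] for key, c in trans_votes.items()}
    return delta, state_output

# Toy usage on random data standing in for recorded RSNN activity:
T, d = 200, 6
rng = np.random.default_rng(1)
feats = rng.random((T, d))
ins = rng.integers(0, 2, size=T).tolist()
outs = rng.integers(0, 2, size=T).tolist()
delta, state_output = extract_moore_machine(feats, ins, outs, n_states=3)
```

Majority voting over transitions makes the sketch tolerant of an imperfectly induced machine: where the votes are split, the candidate state behaves inconsistently, which is one way such a procedure can expose the weaknesses of the induced mechanism mentioned in the abstract.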