The benefits of intrinsic plasticity (IP), an unsupervised, local, biologically inspired adaptation rule that tunes the probability density of a neuron's output towards an exponential distribution, thereby realizing an information-maximization principle, have already been demonstrated. In this work, we extend this adaptation method to a more commonly used non-linearity and a Gaussian output distribution. After deriving the learning rules, we show the effects of the transfer function's bounded output on the moments of the actual output distribution, which allows us to show that the rule converges to the expected distributions even in random recurrent networks. The IP rule is evaluated in a reservoir computing setting: a temporal processing technique that uses a random, untrained recurrent network as an excitable medium, whose state is fed to a linear regressor that computes the desired output. We present an experimental comparison of the different IP rules on three benchmark tasks with different characteristics. Furthermore, we show that this unsupervised reservoir adaptation can turn networks with very constrained topologies, such as a 1D lattice, whose dynamics are generally quite unsuitable for reservoir computing, into reservoirs that can solve complex tasks. We clearly demonstrate that IP makes reservoir computing more robust: the internal dynamics autonomously tune themselves, irrespective of initial weights or input scaling, to the dynamic regime that is optimal for a given task.
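Since the abstract only states that learning rules are derived, the following Python sketch should be read as an illustration rather than the paper's reference implementation. It assumes a tanh non-linearity y = tanh(a*x + b) and the commonly cited Gaussian-target IP update, delta_b = -eta*(-mu/sigma^2 + (y/sigma^2)*(2*sigma^2 + 1 - y^2 + mu*y)) and delta_a = eta/a + delta_b*x, where x is the net input; the learning rate, target moments, reservoir size, input signal, and delay task are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N = 100                  # reservoir size (illustrative)
eta = 1e-4               # IP learning rate (assumed)
mu, sigma = 0.0, 0.2     # target mean and std of neuron outputs (assumed)

# Random, untrained recurrent reservoir, rescaled to a chosen spectral radius.
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-1.0, 1.0, N)

# Per-neuron gain a and bias b adapted by IP; each neuron computes y = tanh(a*x + b).
a = np.ones(N)
b = np.zeros(N)

u = rng.uniform(-0.5, 0.5, 5000)   # placeholder one-dimensional input signal
y = np.zeros(N)
states = []

for u_t in u:
    x = W @ y + w_in * u_t         # net input to the non-linearity
    y = np.tanh(a * x + b)         # neuron outputs

    # IP update pushing each output distribution towards N(mu, sigma^2).
    db = -eta * (-mu / sigma**2
                 + (y / sigma**2) * (2 * sigma**2 + 1 - y**2 + mu * y))
    a += eta / a + db * x
    b += db
    states.append(y.copy())

# Linear readout (ridge regression) from the adapted reservoir states,
# here on a toy one-step-delay task.
X = np.stack(states)
target = np.roll(u, 1)
ridge = 1e-6
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ target)
prediction = X @ w_out

In practice one would typically run the IP adaptation as an unsupervised pre-training phase, freeze a and b, and only then collect states and train the linear readout; interleaving the two here merely keeps the sketch short.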