Intrinsic plasticity (IP) refers to a neuron's ability to regulate its firing activity by adapting its intrinsic excitability. Previously, we showed that model neurons combining an information-theoretic model of IP with Hebbian synaptic plasticity can adapt their weight vector to discover heavy-tailed directions in the input space. In this paper we show how a network of such units can solve a standard non-linear independent component analysis (ICA) problem. We also present a model for the formation of maps of oriented receptive fields in primary visual cortex and compare our results with those from ICA. Together, our results indicate that intrinsic plasticity, which tries to locally maximize information transmission at the level of individual neurons, may play an important role in the learning of efficient sensory representations in the cortex.
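The mechanism described above can be sketched in a few lines of NumPy. The sketch below is an assumption-laden illustration, not the paper's implementation: it uses a single sigmoid unit whose gain and bias are adapted by a gradient IP rule of the kind proposed by Triesch (driving the output distribution toward an exponential with mean `mu`), combined with a normalized Hebbian weight update. The synthetic input, the learning rates, and the step count are all illustrative choices; in favorable runs the weight vector tends to align with the one heavy-tailed (Laplacian) direction hidden among Gaussian distractors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic input (hypothetical setup): one heavy-tailed Laplacian source
# hidden among Gaussian distractors, mixed by a random orthogonal matrix.
dim = 5
mix = np.linalg.qr(rng.normal(size=(dim, dim)))[0]

def sample():
    s = rng.normal(size=dim)
    s[0] = rng.laplace()          # the heavy-tailed direction
    return mix @ s

# Sigmoid unit y = 1 / (1 + exp(-(a*x + b))); IP adapts gain a and bias b
# so the output distribution approaches an exponential with mean mu.
w = rng.normal(size=dim)
w /= np.linalg.norm(w)
a, b = 1.0, 0.0
mu, eta_ip, eta_heb = 0.2, 0.01, 0.002

for _ in range(20000):
    u = sample()
    x = w @ u                     # net input
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))
    # Intrinsic plasticity: gradient rule for bias and gain
    b += eta_ip * (1.0 - (2.0 + 1.0 / mu) * y + y * y / mu)
    a += eta_ip * (1.0 / a + x - (2.0 + 1.0 / mu) * x * y + x * y * y / mu)
    # Hebbian synaptic plasticity with multiplicative weight normalization
    w += eta_heb * y * u
    w /= np.linalg.norm(w)

# Cosine of the angle between the learned weights and the Laplacian direction
alignment = abs(w @ mix[:, 0])
print(round(float(alignment), 2))
```

Because the unit's average activity is held low by IP, the Hebbian rule cannot simply track the direction of maximal variance; the unit is instead pushed toward directions whose projections are sparse, which is the interaction the abstract attributes to the combined IP/Hebbian model.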