Parabolic bursting in an excitable system coupled with a slow oscillation
SIAM Journal on Applied Mathematics
Dynamics of the firing probability of noisy integrate-and-fire neurons
Neural Computation
Firing rate of the noisy quadratic integrate-and-fire neuron
Neural Computation
Type I membranes, phase resetting curves, and synchrony
Neural Computation
Brain-scale simulation of the neocortex on the IBM Blue Gene/L supercomputer
IBM Journal of Research and Development
A Bio-inspired Connectionist Architecture for Visual Classification of Moving Objects
ICANN '08 Proceedings of the 18th International Conference on Artificial Neural Networks, Part I
Bio-inspired Connectionist Architecture for Visual Detection and Refinement of Shapes
ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part II
Memory capacities for synaptic and structural plasticity
Neural Computation
A bio-inspired connectionist approach for motion description through sequences of images
ICANN '07 Proceedings of the 17th International Conference on Artificial Neural Networks
Neuromimetic indicators for visual perception of motion
BVAI '07 Proceedings of the 2nd International Conference on Advances in Brain, Vision and Artificial Intelligence
A model for delay activity without recurrent excitation
ICANN '05 Proceedings of the 15th International Conference on Artificial Neural Networks: Biological Inspirations, Part I
The hippocampus as a stable memory allocator for cortex
Neural Computation
Inhibition enhances memory capacity: optimal feedback, transient replay and oscillations
Journal of Computational Neuroscience
Cortical neurons are predominantly excitatory and highly interconnected. In spite of this, the cortex is remarkably stable: normal brains do not exhibit the kind of runaway excitation one might expect of such a system. How does the cortex maintain stability in the face of this massive excitatory feedback? More importantly, how does it do so during computations, which necessarily involve elevated firing rates? Here we address these questions in the context of attractor networks: networks that exhibit multiple stable states, or memories. We find that such networks can be stabilized at the relatively low firing rates observed in vivo if two conditions are met: (1) the background state, where all neurons are firing at low rates, is inhibition dominated, and (2) the fraction of neurons involved in a memory is above some threshold, so that there is sufficient coupling between the memory neurons and the background. This allows "dynamical stabilization" of the attractors, meaning feedback from the pool of background neurons stabilizes what would otherwise be an unstable state. We suggest that dynamical stabilization may be a strategy used for a broad range of computations, not just those involving attractors.
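The kind of system the abstract describes can be illustrated with a minimal firing-rate sketch: an excitatory memory pool with recurrent excitation, held in check by inhibitory feedback, that has both a low background state and an elevated persistent "memory" state. The sketch below is an assumption-laden toy model, not the paper's equations: the background excitatory pool is collapsed into a constant drive, inhibition is treated as fast and linear, and the gain function and all weights are illustrative choices.

```python
# Minimal rate-model sketch of a bistable (attractor) memory pool with
# inhibitory feedback. All parameters, the sigmoid gain, and the reduction
# to a single pool are illustrative assumptions, not the authors' model.

import numpy as np

R_MAX, THETA, BETA = 30.0, 10.0, 2.0   # sigmoid gain: max rate, threshold, width

def phi(x):
    """Sigmoidal firing-rate gain function (an assumed choice)."""
    return R_MAX / (1.0 + np.exp(-(x - THETA) / BETA))

W_REC = 2.0    # recurrent excitation within the memory pool
W_INH = 2.0    # inhibitory feedback weight onto the memory pool
G_INH = 0.5    # inhibitory pool rate tracks the memory pool: r_inh = G_INH * r_mem
I_EXT = 0.0    # external/background drive, folded into a constant here
TAU, DT, T = 0.010, 0.001, 1.0

def settle(r0):
    """Integrate tau * dr/dt = -r + phi(net input) from initial rate r0 (Hz)."""
    r = r0
    for _ in range(int(T / DT)):
        r_inh = G_INH * r                         # fast, linear inhibition
        net = W_REC * r - W_INH * r_inh + I_EXT   # net recurrent input
        r += DT / TAU * (-r + phi(net))
    return r

# Without a kick the pool relaxes to a low background rate; with a strong
# enough kick it settles into an elevated, persistent "memory" state.
print(f"no kick : {settle(0.0):5.2f} Hz")
print(f"kick    : {settle(15.0):5.2f} Hz")
```

With these particular numbers the pool is bistable (roughly 0.2 Hz background versus a high memory state near saturation), so the sketch captures bistability but not the paper's main point, namely that feedback from an inhibition-dominated background pool can hold the memory state at the moderate rates seen in vivo; reproducing that would require modeling the background population explicitly.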