A bright red light may trigger a sudden motor action in a driver crossing an intersection: stepping at once on the brakes. The same red light, however, may be entirely inconsequential if it appears, say, inside a movie theater. Clearly, context determines whether a particular stimulus will trigger a motor response, but what is the neural correlate of this? How does the nervous system enable or disable whole networks so that they are responsive or not to a given sensory signal? Using theoretical models and computer simulations, I show that networks of neurons have a built-in capacity to switch between two types of dynamic state: one in which activity is low and approximately equal for all units, and another in which different activity distributions are possible and may even change dynamically. This property allows whole circuits to be turned on or off by weak, unstructured inputs. These results are illustrated using networks of integrate-and-fire neurons with diverse architectures. In agreement with the analytic calculations, a uniform background input may determine whether a random network has one or two stable firing levels; it may give rise to randomly alternating firing episodes in a circuit with reciprocal inhibition; and it may regulate the capacity of a center-surround circuit to produce either self-sustained activity or traveling waves. Thus, the functional properties of a network may be drastically modified by a simple, weak signal. This mechanism works as long as the network is able to exhibit stable firing states, or attractors.
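The switch described above — a uniform background input determining whether a network has one or two stable firing levels — can be illustrated with a minimal mean-field rate model. This is a sketch under stated assumptions, not the paper's actual integrate-and-fire simulations: the sigmoidal gain function `f`, the recurrent weight `w`, and the thresholds are illustrative choices, tuned only so that a weak background input `I_bg` yields a single low-activity fixed point while a stronger one makes the same circuit bistable.

```python
import numpy as np

def f(x, theta=0.5, beta=0.05):
    """Sigmoidal rate function (illustrative parameters, not from the paper)."""
    return 1.0 / (1.0 + np.exp(-(x - theta) / beta))

def stable_rates(I_bg, w=0.6, n_grid=4001):
    """Locate the stable fixed points of the rate equation r = f(w*r + I_bg).

    Fixed points are zeros of g(r) = f(w*r + I_bg) - r on [0, 1]; the
    stable ones are the down-crossings, where g passes from + to -.
    """
    r = np.linspace(0.0, 1.0, n_grid)
    g = f(w * r + I_bg) - r
    down = np.where((g[:-1] > 0) & (g[1:] <= 0))[0]
    return [0.5 * (r[i] + r[i + 1]) for i in down]

# Weak background drive: a single stable low-activity state.
print(len(stable_rates(0.0)))
# Stronger uniform background drive: the same circuit is now bistable,
# with a quiescent state and a self-sustained high-activity state.
print(len(stable_rates(0.2)))
```

The point of the sketch is that `I_bg` is spatially uniform and weak, yet it changes the number of attractors — the rate-model analogue of turning a whole circuit on or off with an unstructured signal.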