Neural information processing with feedback modulations
Neural Computation
Two issues concerning the application of continuous attractors in neural systems are investigated: the computational robustness of continuous attractors to input noise and the implementation of Bayesian online decoding. In an idealized mathematical model of a continuous attractor, the decoded stimulus is highly sensitive to input noise; this sensitivity is an inevitable consequence of the system's neutral stability. To overcome this shortcoming, we modify the conventional network model by including extra dynamical interactions between neurons. These interactions vary according to a biologically plausible Hebbian learning rule and serve the computational role of memorizing and propagating stimulus information accumulated over time. As a result, the new network model responds to the history of external inputs over a period of time and hence becomes insensitive to short-term fluctuations. Moreover, because the dynamical interactions provide a mechanism for conveying prior knowledge of the stimulus, that is, the information carried by previously presented stimuli, the network effectively implements online Bayesian inference. The study also reveals interesting behaviors in neural population coding, such as the trade-off between decoding stability and the speed of tracking time-varying stimuli, and the relationship between neural tuning width and tracking speed.
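The core idea — that integrating stimulus information over time makes decoding robust to short-term input fluctuations — can be illustrated with a minimal sketch. This is not the paper's network model: here a simple leaky integrator of population activity stands in for the slow dynamical interactions that memorize the input history, and all numerical values (tuning width, noise level, time constant) are assumed for illustration. The sketch compares instantaneous population-vector decoding of a noisy bump of activity against decoding from the temporally integrated activity.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                                    # neurons with preferred stimuli on a ring
prefs = np.linspace(-np.pi, np.pi, N, endpoint=False)
a = 0.5                                    # tuning width (assumed)
stim = 0.3                                 # true (static) stimulus

def tuning(x):
    """Gaussian tuning curve over the wrapped distance on the ring."""
    d = np.angle(np.exp(1j * (prefs - x)))
    return np.exp(-d**2 / (2 * a**2))

def decode(r):
    """Population-vector decoding: angle of the summed activity vector."""
    return np.angle(np.sum(r * np.exp(1j * prefs)))

T = 500                                    # number of time steps
tau = 10.0                                 # time constant of the memory trace (assumed)
m = tuning(stim)                           # integrated activity ("memory" trace)
inst, smooth = [], []
for t in range(T):
    r = tuning(stim) + 0.3 * rng.standard_normal(N)  # noisy population response
    m += (r - m) / tau                     # leaky integration of the input history
    inst.append(decode(r))                 # decode from the instantaneous response
    smooth.append(decode(m))               # decode from the integrated response

print("instantaneous decode std:", np.std(inst))
print("integrated decode std:   ", np.std(smooth))
```

The integrated estimate fluctuates far less than the instantaneous one, at the cost of responding more slowly if the stimulus were to change — the same stability-versus-tracking-speed trade-off the abstract describes.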