'Continuous attractor' neural networks can maintain a localised packet of neuronal activity that represents the current state of an agent in a continuous space, even without external sensory input. In applications such as the representation of head direction or of location in the environment, only one packet of activity is needed. For some spatial computations, however, a number of different locations, each with its own features, must be held in memory. We extend previous approaches to continuous attractor networks (in which a single packet of activity is maintained) by showing that one continuous attractor network can maintain multiple packets of activity simultaneously, provided that each packet lies in a different state space or map. We also show how such a network could self-organise through learning so that the packet in each space can be moved continuously within that space by idiothetic (self-motion) inputs. We show how such multi-packet continuous attractor networks could be used to keep different types of feature (such as form vs colour) simultaneously active at the correct location in a spatial representation. We further show how high-order synapses can improve the performance of these networks, and how the location of a packet could be read out by motor networks. The multi-packet continuous attractor networks described here may be relevant to spatial representations in brain areas such as the parietal cortex and hippocampus.
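The core mechanism described above can be illustrated with a minimal sketch of a one-dimensional ring continuous attractor network. This is not the model from the paper: the weight profile, activation function, and all parameter values below are illustrative assumptions. It shows only the basic property that, with short-range recurrent excitation and broad inhibition, an activity packet set up by a transient cue persists at the cued location after the cue is withdrawn.

```python
import numpy as np

# Minimal 1-D ring continuous attractor network (illustrative sketch;
# all parameter values are arbitrary assumptions, not from the paper).
N = 100                                        # neurons with preferred angles on a ring
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

# Recurrent weights: excitation between neurons with nearby preferred
# angles minus uniform inhibition (a Mexican-hat-like profile).
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2 * np.pi - d)               # shortest distance on the ring
W = np.exp(-d**2 / (2 * 0.3**2)) - 0.5

r = np.zeros(N)                                # firing rates
cue = np.exp(-d[:, N // 2]**2 / (2 * 0.3**2))  # transient sensory cue at angle pi

for t in range(300):
    ext = cue if t < 50 else 0.0               # cue is removed after 50 steps
    h = W @ r / N + ext                        # recurrent + external input
    r = np.maximum(h, 0.0)                     # threshold-linear activation
    r = r / (1e-9 + r.max())                   # crude normalisation bounds activity

# After the cue is withdrawn, the activity packet persists at the cued
# location, maintained purely by the recurrent connectivity.
print(int(np.argmax(r)))  # -> 50
```

By the same logic, the paper's multi-packet case would correspond to packets in different state spaces (maps) coexisting in one network; the sketch above shows only the single-packet maintenance property that those extensions build on.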