Dynamics and computation of continuous attractors

  • Authors:
  • Si Wu; Kosuke Hamaguchi; Shun-ichi Amari

  • Affiliations:
  • Department of Informatics, University of Sussex, Brighton BN1 9QH, U.K. siwu@sussex.ac.uk; Amari Research Unit, RIKEN Brain Science Institute, Saitama 351-0198, Japan. kosuke.hamaguchi@univ-paris5.fr; Amari Research Unit, RIKEN Brain Science Institute, Saitama 351-0198, Japan. amari@brain.riken.go.jp

  • Venue:
  • Neural Computation
  • Year:
  • 2008

Abstract

The continuous attractor is a promising model for describing the encoding of continuous stimuli in neural systems. In a continuous attractor, the stationary states of the neural system form a continuous parameter space, on which the system is neutrally stable. This property enables the neural system to track time-varying stimuli smoothly, but it also degrades the accuracy of information retrieval, since these stationary states are easily disturbed by external noise. In this work, based on a simple model, we systematically investigate the dynamics and the computational properties of continuous attractors. To analyze the dynamics of a large-size network, which is otherwise extremely complicated, we develop a strategy to reduce its dimensionality by utilizing the fact that a continuous attractor eliminates noise components perpendicular to the attractor space very quickly. We therefore project the network dynamics onto the tangent of the attractor space and successfully simplify it to a one-dimensional Ornstein-Uhlenbeck process. Based on this simplified model, we investigate (1) the decoding error of a continuous attractor driven by noisy external inputs, (2) the tracking speed of a continuous attractor when the external stimulus changes abruptly, (3) the neural correlation structure associated with the specific dynamics of a continuous attractor, and (4) the consequences of asymmetric neural correlations for statistical population decoding. The potential implications of these results for our understanding of neural information processing are also discussed.
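The abstract states that, after projecting onto the tangent of the attractor space, the network dynamics reduce to a one-dimensional Ornstein-Uhlenbeck process. The sketch below simulates such a process with Euler-Maruyama integration, purely to illustrate what that reduced description looks like; the parameter values and function names are illustrative assumptions, not quantities reported in the paper.

```python
import numpy as np

def simulate_ou(theta=1.0, sigma=0.2, x0=0.0, dt=1e-3, n_steps=5000, seed=0):
    """Euler-Maruyama integration of the OU process dx = -theta * x dt + sigma dW.

    Here x would stand for the projected position along the attractor
    (e.g., the decoding error relative to the true stimulus); theta and
    sigma are assumed illustrative values, not the paper's.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Wiener increment
        x[t] = x[t - 1] - theta * x[t - 1] * dt + sigma * dw
    return x

if __name__ == "__main__":
    trace = simulate_ou()
    # The stationary variance of an OU process is sigma^2 / (2 * theta);
    # comparing it with the empirical tail variance checks the simulation.
    print("empirical variance:  ", trace[1000:].var())
    print("theoretical variance:", 0.2**2 / (2 * 1.0))
```

In this picture, the stationary variance of the projected process would correspond to the decoding error under noisy inputs discussed in point (1) of the abstract, while the relaxation rate (theta) governs how quickly the state tracks an abrupt stimulus change, as in point (2).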