Selectively grouping neurons in recurrent networks of lateral inhibition

  • Authors:
  • Xiaohui Xie;Richard H. R. Hahnloser;H. Sebastian Seung

  • Affiliations:
  • Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA;Department of Brain and Cognitive Sciences and Howard Hughes Medical Institute, Massachusetts Institute of Technology, Cambridge, MA;Department of Brain and Cognitive Sciences and Howard Hughes Medical Institute, Massachusetts Institute of Technology, Cambridge, MA

  • Venue:
  • Neural Computation
  • Year:
  • 2002

Abstract

Winner-take-all networks have been proposed to underlie many of the brain's fundamental computational abilities. However, not much is known about how to extend the grouping of potential winners in these networks beyond single neurons or uniformly arranged groups of neurons. We show that competition between arbitrary groups of neurons can be realized by organizing lateral inhibition in linear threshold networks. Given a collection of potentially overlapping groups (with the exception of some degenerate cases), the lateral inhibition results in network dynamics such that any permitted set of neurons that can be coactivated by some input at a stable steady state is contained in one of the groups. The information about the input is preserved in this operation. The activity level of a neuron in a permitted set corresponds to its stimulus strength, amplified by some constant. Sets of neurons that are not part of a group cannot be coactivated by any input at a stable steady state. We analyze the storage capacity of such a network for random groups: the number of random groups the network can store as permitted sets without creating too many spurious ones. In this framework, we calculate the optimal sparsity of the groups (maximizing group entropy). We find that for dense inputs, the optimal sparsity is unphysiologically small. However, when the inputs and the groups are equally sparse, we derive a more plausible optimal sparsity. We believe our results are the first steps toward attractor theories in hybrid analog-digital networks.
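To make the group-competition idea concrete, here is a minimal simulation sketch of a linear threshold network with lateral inhibition. All numbers, group choices, and weight values below are illustrative assumptions, not parameters from the paper: two disjoint groups of three neurons each, no interaction within a group, and a fixed inhibitory weight between neurons that share no group. Under a standard threshold-linear rate model, the stable steady state coactivates only neurons contained in one group, and the winners' activities track their input strengths, in line with the behavior the abstract describes.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

# Hypothetical example: 6 threshold-linear neurons, two disjoint groups.
groups = [{0, 1, 2}, {3, 4, 5}]
n = 6

# Lateral inhibition: neurons that share no group inhibit each other
# with assumed strength beta; neurons within a group do not interact here.
beta = 2.0
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j and not any(i in g and j in g for g in groups):
            W[i, j] = -beta

# Input: group {0, 1, 2} receives the stronger stimulus.
b = np.array([1.0, 0.9, 0.8, 0.5, 0.4, 0.3])

# Euler integration of the rate dynamics  dx/dt = -x + [b + W x]_+
x = np.zeros(n)
dt = 0.1
for _ in range(2000):
    x = x + dt * (-x + relu(b + W @ x))

# Only one group survives at the stable steady state; within it,
# each neuron's activity reflects its own stimulus strength.
active = {i for i in range(n) if x[i] > 1e-6}
print(active)
print(np.round(x, 3))
```

In this toy setting the gain within the winning group is 1 because there is no within-group interaction; adding recurrent excitation inside groups would produce the constant amplification mentioned in the abstract.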