Winner-take-all networks have been proposed to underlie many of the brain's fundamental computational abilities. However, little is known about how to extend the grouping of potential winners in these networks beyond single neurons or uniformly arranged groups of neurons. We show that competition between arbitrary groups of neurons can be realized by organizing lateral inhibition in linear threshold networks. Given a collection of potentially overlapping groups (excluding some degenerate cases), the lateral inhibition results in network dynamics such that any permitted set of neurons, that is, any set that can be coactivated by some input at a stable steady state, is contained in one of the groups. The information about the input is preserved in this operation: the activity level of a neuron in a permitted set corresponds to its stimulus strength, amplified by some constant. Sets of neurons that are not part of a group cannot be coactivated by any input at a stable steady state. We analyze the storage capacity of such a network for random groups, that is, the number of random groups the network can store as permitted sets without creating too many spurious ones. In this framework, we calculate the optimal sparsity of the groups (the sparsity that maximizes group entropy). We find that for dense inputs the optimal sparsity is unphysiologically small; however, when the inputs are as sparse as the groups, we derive a more plausible optimal sparsity. We believe our results are first steps toward attractor theories in hybrid analog-digital networks.
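As a concrete illustration of the mechanism, the following is a minimal sketch in Python/NumPy, not the paper's construction. It assumes the standard threshold-linear rate dynamics dx/dt = -x + [Wx + h]_+ and wires lateral inhibition so that two neurons inhibit each other only when they never share a group; the specific groups, the self-excitation alpha, and the inhibition strength beta are illustrative assumptions. Under these assumptions, the coactive neurons at a stable steady state fall inside a single group, and each winner's rate is its input amplified by the constant 1/(1 - alpha).

```python
import numpy as np

def build_weights(n, groups, alpha=0.5, beta=2.0):
    """Lateral inhibition organized by a collection of (possibly overlapping)
    groups: neurons that share at least one group do not inhibit each other;
    all other pairs inhibit with strength beta. alpha and beta are
    illustrative choices, not values from the paper."""
    W = -beta * np.ones((n, n))           # default: mutual inhibition
    for g in groups:                      # zero out inhibition within groups
        for i in g:
            for j in g:
                W[i, j] = 0.0
    np.fill_diagonal(W, alpha)            # weak self-excitation, alpha < 1
    return W

def simulate(W, h, steps=20000, dt=0.01):
    """Euler integration of the rate model dx/dt = -x + [W x + h]_+."""
    x = np.zeros(len(h))
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + h))
    return x

groups = [{0, 1, 2}, {2, 3, 4}]           # two overlapping groups
h = np.array([1.0, 0.9, 0.8, 0.4, 0.5])   # input; group {0, 1, 2} is favored
W = build_weights(5, groups)
x = simulate(W, h)

active = sorted(i for i, xi in enumerate(x) if xi > 1e-6)
print("coactive set:", active)            # expected: [0, 1, 2], one group
print("gain per winner:", x[active] / h[active])  # ~1/(1 - alpha) = 2.0
```

Raising beta relative to the inputs is what forbids coactivation of neurons that share no group; weakening it lets such sets survive as stable steady states, loosely mirroring the spurious permitted sets that the capacity analysis quantifies.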