Biophysical modeling studies have suggested that neurons with active dendrites can be viewed as linear units augmented by product terms that arise from interactions between synaptic inputs within the same dendritic subregions. However, the degree to which local nonlinear synaptic interactions could augment the memory capacity of a neuron has not been quantified. To approach this question, we have studied the family of subsampled quadratic (SQ) classifiers: linear classifiers augmented by the best k terms drawn from the set of K = (d² + d)/2 second-order product terms available in d dimensions. We developed an expression for the total parameter entropy, whose form shows that the capacity of an SQ classifier does not reside solely in its conventional weight values, that is, in the explicit memory used to store the constant, linear, and higher-order coefficients. Rather, we identify a second type of parameter flexibility that jointly contributes to an SQ classifier's capacity: the choice as to which product terms are included in the model and which are not. We validate the form of the entropy expression using empirical studies of relative capacity within families of geometrically isomorphic SQ classifiers. Our results have direct implications for neurobiological (and other hardware) learning systems, where, in the limit of high-dimensional input spaces and low-resolution synaptic weight values, this relatively little-explored form of choice flexibility could constitute a major source of trainable model capacity.
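
To make the setup concrete, the following is a minimal Python sketch of an SQ classifier, not the authors' implementation. It enumerates the K = (d² + d)/2 second-order products, keeps the k best-scoring ones, fits the remaining coefficients, and reports the log2 C(K, k) bits implied by the subsampling choice. The correlation-based term scoring, the least-squares fit, and the ±1 labels are all hypothetical stand-ins; the abstract does not specify how terms are selected or trained.

    import numpy as np
    from math import comb, log2

    def product_terms(X):
        """All K = (d^2 + d)/2 second-order products x_i * x_j, i <= j."""
        d = X.shape[1]
        idx = [(i, j) for i in range(d) for j in range(i, d)]
        return np.column_stack([X[:, i] * X[:, j] for i, j in idx]), idx

    def fit_sq(X, y, k):
        """Linear classifier augmented by the k best-scoring product terms."""
        P, idx = product_terms(X)
        # Score each candidate term by |correlation| with the labels
        # (a hypothetical stand-in for the paper's selection procedure).
        scores = np.abs((P - P.mean(0)).T @ (y - y.mean()))
        chosen = np.argsort(scores)[-k:]
        # Explicit memory: constant, linear, and product-term coefficients.
        Z = np.column_stack([np.ones(len(X)), X, P[:, chosen]])
        w, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return w, [idx[c] for c in chosen]

    # Choice entropy: bits needed to specify which k of the K terms were kept.
    d, k = 10, 5
    K = (d**2 + d) // 2
    print(f"K = {K}, choice entropy = {log2(comb(K, k)):.1f} bits")

    # Toy usage on synthetic +/-1 labels.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, d))
    y = np.sign(rng.standard_normal(200))
    w, terms = fit_sq(X, y, k)

The decomposition makes the abstract's two capacity sources visible: the k + d + 1 stored coefficients constitute the explicit memory, while log2 C(K, k) bits are carried by the choice of which product terms to include. The exact entropy expression is not given in the abstract, but its claim is that in high-dimensional input spaces with low-resolution weights, the choice term can come to dominate.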