A spiking neuron "computes" by transforming a complex dynamical input into a train of action potentials, or spikes. The computation performed by the neuron can be formulated as dimensional reduction, or feature detection, followed by a nonlinear decision function over the low-dimensional space. Generalizations of the reverse correlation technique with white noise input provide a numerical strategy for extracting the relevant low-dimensional features from experimental data, and information theory can be used to evaluate the quality of the low-dimensional approximation. We apply these methods to analyze the simplest biophysically realistic model neuron, the Hodgkin-Huxley (HH) model, using this system to illustrate the general methodological issues. We focus on the features in the stimulus that trigger a spike, explicitly eliminating the effects of interactions between spikes. One can approximate this triggering "feature space" as a two-dimensional linear subspace in the high-dimensional space of input histories, capturing in this way a substantial fraction of the mutual information between inputs and spike time. We find that an even better approximation, however, is to describe the relevant subspace as two-dimensional but curved; in this way, we can capture 90% of the mutual information even at high time resolution. Our analysis provides a new understanding of the computational properties of the HH model. While it is common to approximate neural behavior as "integrate and fire," the HH model is neither an integrator nor well described by a single threshold.
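The dimensional-reduction step described above can be sketched with the standard spike-triggered average and spike-triggered covariance analysis. This is a minimal illustration, not the paper's actual pipeline: a toy linear-threshold neuron stands in for the HH model, and all parameters (window length, threshold, filter shape) are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: Gaussian white-noise stimulus driving a toy neuron.
T, win = 200_000, 50                   # number of samples, history-window length
stim = rng.normal(size=T)

# Toy "neuron": it spikes whenever a fixed linear filter of the recent
# stimulus crosses a threshold (a stand-in for the HH dynamics).
t = np.arange(win)
true_filter = np.exp(-t / 10.0) * np.sin(t / 5.0)
drive = np.convolve(stim, true_filter[::-1], mode="valid")
spike_idx = np.nonzero(drive > 3.0)[0] + win - 1   # index of each window's last sample

# Spike-triggered ensemble: the stimulus history preceding each spike.
ste = np.stack([stim[i - win + 1 : i + 1] for i in spike_idx])

# Spike-triggered average gives the first candidate feature ...
sta = ste.mean(axis=0)

# ... and the spike-triggered covariance, compared against the prior
# covariance (identity for white noise), exposes further relevant
# directions: eigenvectors whose eigenvalues deviate most from zero.
stc = np.cov(ste, rowvar=False) - np.eye(win)
eigvals, eigvecs = np.linalg.eigh(stc)
order = np.argsort(np.abs(eigvals))[::-1]
features = eigvecs[:, order[:2]]       # two-dimensional linear feature subspace
```

Projecting stimulus histories onto `features` yields the low-dimensional description whose quality the abstract proposes to score with mutual information; the curved-subspace refinement would replace this linear projection with a nonlinear one.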