Elements of information theory.
The upward bias in measures of information derived from limited data samples. Neural Computation.
An introduction to symbolic dynamics and coding.
Spikes: exploring the neural code.
Synergy and redundancy among brain cells of behaving monkeys. Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II.
Stochastic Complexity in Statistical Inquiry Theory.
Measuring information spatial densities. Neural Computation.
Estimation of entropy and mutual information. Neural Computation.
Computation in a single neuron: Hodgkin and Huxley revisited. Neural Computation.
Pattern Classification (2nd Edition).
Information Theory, Inference & Learning Algorithms.
Estimating Entropy Rates with Bayesian Confidence Intervals. Neural Computation.
Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity. Neural Computation.
The context-tree weighting method: basic properties. IEEE Transactions on Information Theory.
Indices for testing neural codes. Neural Computation.
Synergy, redundancy, and multivariate information measures: an experimentalist's perspective. Journal of Computational Neuroscience.
Information theory provides a natural set of statistics to quantify the amount of knowledge a neuron conveys about a stimulus. Related work (Kennel, Shlens, Abarbanel, & Chichilnisky, 2005) demonstrated how to reliably estimate, with a Bayesian confidence interval, the entropy rate of a discrete, observed time series. We extend this method to measure the rate of novel information that a neural spike train encodes about a stimulus: the average and specific mutual information rates. Our estimator makes few assumptions about the underlying neural dynamics, shows excellent performance in experimentally relevant regimes, and uniquely provides confidence intervals bounding the range of information rates compatible with the observed spike train. We validate this estimator with simulations of spike trains and highlight how stimulus parameters affect its convergence in bias and variance. Finally, we apply these ideas to a recording from a guinea pig retinal ganglion cell and compare the results to a simple linear decoder.
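For orientation, the baseline that estimators like this improve upon is the naive "plug-in" entropy-rate estimate: discretize the spike train into time bins, count length-L words, and divide the block entropy by L. This is only a minimal sketch under assumed binary binning, not the paper's estimator; the function name `plugin_entropy_rate` is ours, and the Bayesian confidence intervals and bias corrections of the actual method are not reproduced here.

```python
from collections import Counter
import math


def plugin_entropy_rate(spikes, word_len):
    """Naive plug-in estimate of the entropy rate (bits per bin) of a
    binary spike train: block entropy of length-`word_len` words,
    divided by the word length.

    Word probabilities estimated from limited data make this estimator
    biased (cf. "The upward bias in measures of information derived
    from limited data samples"), which is what motivates
    confidence-interval methods like the one described above.
    """
    # Slide a window of length `word_len` over the binned train.
    words = [tuple(spikes[i:i + word_len])
             for i in range(len(spikes) - word_len + 1)]
    counts = Counter(words)
    n = len(words)
    # Shannon block entropy of the empirical word distribution.
    block_entropy = -sum((c / n) * math.log2(c / n)
                         for c in counts.values())
    return block_entropy / word_len
```

For a silent (all-zero) train the estimate is 0 bits per bin; for a balanced train with `word_len = 1` it is exactly 1 bit per bin. With longer words and short recordings the word counts undersample the true distribution, which is the bias regime the paper's confidence intervals are designed to bound.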