Spikes: Exploring the Neural Code
Estimating the temporal interval entropy of neuronal discharge. Neural Computation.
A spike-train probability model. Neural Computation.
Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing).
Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems.
k-means++: the advantages of careful seeding. SODA '07: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms.
The context-tree weighting method: basic properties. IEEE Transactions on Information Theory.
Entropy rate quantifies the rate at which a stochastic process generates information (Cover & Thomas, 2006). For decades, the temporal dynamics of spike trains generated by neurons have been studied as a stochastic process (Barbieri, Quirk, Frank, Wilson, & Brown, 2001; Brown, Frank, Tang, Quirk, & Wilson, 1998; Kass & Ventura, 2001; Metzner, Koch, Wessel, & Gabbiani, 1998; Zhang, Ginzburg, McNaughton, & Sejnowski, 1998). We propose here to estimate the entropy rate of a spike train from an inhomogeneous hidden Markov model of the spike intervals. The model is constructed by building a context tree that lays out the conditional probabilities of the various subsequences of the spike train. For each state in the Markov chain, we assume a gamma distribution over the spike intervals, although any appropriate distribution may be employed as circumstances dictate. The entropy and its confidence intervals are calculated from bootstrap samples drawn from a large raw data sequence. The estimator was first tested on synthetic data generated by multiple-order Markov chains, and it always converged to the theoretical Shannon entropy rate (except in the case of a sixth-order model, where the calculations were terminated before convergence was reached). We also applied the method to experimental data and compared its performance with that of several other methods of entropy estimation.
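Two ingredients of the abstract can be illustrated compactly: the differential entropy of a gamma distribution over interspike intervals, and bootstrap confidence intervals for an entropy estimate. The sketch below is not the paper's context-tree method; it is a minimal, assumed single-state version that fits a gamma distribution to a set of intervals by the method of moments (the function names `gamma_mom`, `gamma_entropy`, and `bootstrap_entropy` are illustrative, not from the source).

```python
import numpy as np
from scipy.special import gammaln, digamma

def gamma_mom(isis):
    """Fit Gamma(shape k, scale theta) to interspike intervals by method of moments."""
    m, v = isis.mean(), isis.var()
    k = m * m / v        # shape: mean^2 / variance
    theta = v / m        # scale: variance / mean
    return k, theta

def gamma_entropy(k, theta):
    """Differential entropy (in nats) of a Gamma(k, theta) distribution:
    H = k + ln(theta) + ln(Gamma(k)) + (1 - k) * psi(k)."""
    return k + np.log(theta) + gammaln(k) + (1.0 - k) * digamma(k)

def bootstrap_entropy(isis, n_boot=1000, ci=0.95, rng=None):
    """Point estimate of the interval entropy plus a bootstrap percentile CI."""
    rng = np.random.default_rng(rng)
    # Resample the raw intervals with replacement, refit, and recompute entropy.
    est = [gamma_entropy(*gamma_mom(rng.choice(isis, size=isis.size)))
           for _ in range(n_boot)]
    lo, hi = np.percentile(est, [100 * (1 - ci) / 2, 100 * (1 + ci) / 2])
    return gamma_entropy(*gamma_mom(isis)), (lo, hi)
```

In the paper's setting one such per-state distribution would be fitted for each context (state of the Markov chain) and the per-state entropies combined into an entropy rate; the bootstrap over the raw data sequence supplies the confidence intervals.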