Estimating Information Rates with Confidence Intervals in Neural Spike Trains

  • Authors:
  • Jonathon Shlens; Matthew B. Kennel; Henry D. I. Abarbanel; E. J. Chichilnisky

  • Affiliations:
  • Jonathon Shlens: Salk Institute, La Jolla, CA 92037, and Institute for Nonlinear Science, University of California, San Diego, La Jolla, CA 92093, U.S.A. shlens@salk.edu
  • Matthew B. Kennel: Institute for Nonlinear Science, University of California, San Diego, La Jolla, CA 92093, U.S.A. mkennel@ucsd.edu
  • Henry D. I. Abarbanel: Institute for Nonlinear Science, University of California, and Department of Physics and Marine Physical Laboratory, Scripps Institution of Oceanography, University of California, San Diego, La Jolla, CA 92093, U.S.A. habarbanel@uc ...
  • E. J. Chichilnisky: Salk Institute, La Jolla, CA 92037, U.S.A. ej@salk.edu

  • Venue:
  • Neural Computation
  • Year:
  • 2007

Abstract

Information theory provides a natural set of statistics to quantify the amount of knowledge a neuron conveys about a stimulus. Related work (Kennel, Shlens, Abarbanel, & Chichilnisky, 2005) demonstrated how to reliably estimate, with a Bayesian confidence interval, the entropy rate from a discrete, observed time series. We extend this method to measure the rate of novel information that a neural spike train encodes about a stimulus: the average and specific mutual information rates. Our estimator makes few assumptions about the underlying neural dynamics, shows excellent performance in experimentally relevant regimes, and uniquely provides confidence intervals bounding the range of information rates compatible with the observed spike train. We validate this estimator with simulations of spike trains and highlight how stimulus parameters affect its convergence in bias and variance. Finally, we apply these ideas to a recording from a guinea pig retinal ganglion cell and compare the results to those of a simple linear decoder.
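
The estimator itself builds on the Bayesian entropy-rate machinery of Kennel et al. (2005), which is not reproduced here. As a rough, self-contained illustration of the quantities the abstract names, the Python sketch below computes a plug-in, direct-method-style mutual information rate from repeated stimulus trials (total spike-word entropy minus across-trial noise entropy) and attaches a percentile-bootstrap interval as a crude stand-in for the paper's Bayesian confidence interval. The function names, the word length L, the bin width dt, and the Bernoulli toy data are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def words(binary, L):
    """Encode every length-L window of a 0/1 spike sequence as an integer word."""
    b = np.asarray(binary, dtype=np.int64)
    n = b.size - L + 1
    codes = np.zeros(n, dtype=np.int64)
    for i in range(L):
        codes = codes * 2 + b[i:i + n]
    return codes

def plugin_entropy(codes):
    """Plug-in (maximum-likelihood) entropy, in bits, of a discrete sample."""
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def info_rate(trials, L, dt):
    """Direct-method mutual information rate estimate, in bits/s.

    trials: (n_trials, n_bins) binary array, one row per repeat of the
    same stimulus. Total entropy pools words over all trials and times;
    noise entropy averages the across-trial word entropy at each time.
    """
    per_trial = np.stack([words(t, L) for t in trials])   # (n_trials, n_words)
    H_total = plugin_entropy(per_trial.ravel())
    H_noise = np.mean([plugin_entropy(per_trial[:, j])
                       for j in range(per_trial.shape[1])])
    return (H_total - H_noise) / (L * dt)

def bootstrap_ci(trials, L, dt, n_boot=100, alpha=0.05, seed=0):
    """Percentile bootstrap over trials; a stand-in for the paper's Bayesian CI."""
    rng = np.random.default_rng(seed)
    n = trials.shape[0]
    reps = [info_rate(trials[rng.integers(0, n, n)], L, dt)
            for _ in range(n_boot)]
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Toy usage: repeated trials of an inhomogeneous Bernoulli spike train.
rng = np.random.default_rng(1)
dt, n_bins, n_trials = 0.002, 2000, 50
rate = 20 + 15 * np.sin(2 * np.pi * 2 * np.arange(n_bins) * dt)   # Hz
trials = (rng.random((n_trials, n_bins)) < rate * dt).astype(int)
lo, hi = bootstrap_ci(trials, L=8, dt=dt)
print(f"I = {info_rate(trials, L=8, dt=dt):.1f} bits/s, "
      f"95% bootstrap CI [{lo:.1f}, {hi:.1f}]")
```

Plug-in word entropies of this kind are systematically biased when the number of trials is small relative to the 2^L possible words, which is precisely the regime the paper's Bayesian confidence intervals are designed to handle; the bootstrap interval above captures sampling variance but not that bias.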