State-space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on approximations that are not always accurate. Here we review direct optimization methods that avoid these approximations but nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting is the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially varying firing rates.
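The closing point about bandedness can be illustrated with a toy example (a sketch of the general idea under simplifying assumptions, not the authors' implementation): when the Hessian of the log-posterior is tridiagonal, as in one-dimensional state-space smoothing under a random-walk prior, each Newton step reduces to a tridiagonal linear solve, which costs O(T) rather than the O(T^3) of a dense solve. A minimal pure-Python version of the Thomas algorithm for such solves:

```python
def thomas_solve(sub, diag, sup, rhs):
    """Solve a tridiagonal system A x = rhs in O(n) time (Thomas algorithm).

    sub  : length n-1 subdiagonal of A
    diag : length n   main diagonal of A
    sup  : length n-1 superdiagonal of A
    rhs  : length n   right-hand side
    """
    n = len(diag)
    # Forward elimination: reduce A to upper bidiagonal form.
    c = [0.0] * (n - 1)   # modified superdiagonal
    d = [0.0] * n         # modified right-hand side
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / m
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Example: a second-difference (discrete Laplacian) system of the kind that
# arises from a Gaussian random-walk smoothing prior; the exact solution of
# this particular system is x = (1, 1, 1, 1, 1).
x = thomas_solve([-1.0] * 4, [2.0] * 5, [-1.0] * 4,
                 [1.0, 0.0, 0.0, 0.0, 1.0])
```

In the multivariate case the Hessian is block-tridiagonal rather than tridiagonal, and the analogous block elimination (or a banded Cholesky factorization) gives the same O(T) scaling in the number of time steps.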