A new look at state-space models for neural data
Journal of Computational Neuroscience
A number of important data analysis problems in neuroscience can be solved using state-space models. In this article, we describe fast methods for computing the exact maximum a posteriori (MAP) path of the hidden state variable in these models, given spike train observations. If the state transition density is log-concave and the observation model satisfies certain standard assumptions, then the optimization problem is strictly concave and can be solved rapidly with Newton-Raphson methods, because the Hessian of the log-likelihood is block tridiagonal. We can further exploit this block-tridiagonal structure to develop efficient parameter estimation methods for these models. We describe applications of this approach to neural decoding problems, with a focus on the classic integrate-and-fire model as a key example.
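The key computational point of the abstract is that each Newton-Raphson step requires solving a linear system whose matrix is the block-tridiagonal (banded) Hessian, which costs O(T) in the length of the spike train rather than O(T^3). The following is a minimal sketch of that idea for the simplest case, a one-dimensional latent state with a Gaussian random-walk prior and Poisson spike-count observations; the function name `map_path` and the parameters `sigma` and `n_iter` are illustrative choices, not part of the original paper.

```python
# Hedged sketch: MAP path estimation in a 1-D state-space model with
# Poisson observations y_t ~ Poisson(exp(x_t)) and a Gaussian random-walk
# prior on x. The log-posterior is strictly concave and its Hessian is
# tridiagonal, so each Newton-Raphson step is an O(T) banded solve.
import numpy as np
from scipy.linalg import solve_banded

def map_path(y, sigma=0.5, n_iter=20):
    """Newton-Raphson MAP estimate of the latent path x given counts y."""
    T = len(y)
    x = np.zeros(T)
    # Tridiagonal precision of the random-walk prior, (1/sigma^2) * D'D,
    # where D takes first differences of x.
    d = np.full(T, 2.0)
    d[0] = d[-1] = 1.0
    prior_diag = d / sigma**2
    prior_off = -np.ones(T - 1) / sigma**2
    for _ in range(n_iter):
        lam = np.exp(x)                        # Poisson rate
        grad = y - lam                         # gradient of sum(y*x - exp(x))
        grad[1:] -= (x[1:] - x[:-1]) / sigma**2
        grad[:-1] += (x[1:] - x[:-1]) / sigma**2
        # Negative Hessian: diag(lam) plus the tridiagonal prior precision.
        # Stored in LAPACK banded form for solve_banded.
        ab = np.zeros((3, T))
        ab[0, 1:] = prior_off                  # superdiagonal
        ab[1] = lam + prior_diag               # main diagonal
        ab[2, :-1] = prior_off                 # subdiagonal
        step = solve_banded((1, 1), ab, grad)  # O(T) Newton step
        x = x + step
        if np.max(np.abs(step)) < 1e-10:
            break
    return x
```

The block-tridiagonal case treated in the paper is structurally the same: the banded solve is replaced by a block-Thomas (block-LU) recursion over T blocks, keeping the per-iteration cost linear in the number of time steps.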