Due to the limitations of current voltage-sensing techniques, optimal filtering of noisy, undersampled voltage signals on dendritic trees is a key problem in computational cellular neuroscience. These limitations lead to voltage data that is incomplete (capturing only a small portion of the full spatiotemporal signal) and often highly noisy. In this paper we use a Kalman filtering framework to develop optimal experimental design methods for voltage sampling. Our approach is to use a simple greedy algorithm with lazy evaluation to minimize the expected squared error of the estimated spatiotemporal voltage signal. We take advantage of some particular features of the dendritic filtering problem to efficiently calculate the Kalman estimator's covariance. We test our framework with simulations of real dendritic branching structures and compare the quality of both time-invariant and time-varying sampling schemes. While the benefit of using the experimental design methods was modest in the time-invariant case, improvements of 25–100% over more naïve methods were found when the observation locations were allowed to change with time. We also present a heuristic approximation to the greedy algorithm that is an order of magnitude faster while still providing comparable results.
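The greedy-with-lazy-evaluation idea mentioned above can be sketched generically. The snippet below is not the paper's implementation; it is a minimal illustration of lazy (CELF-style) greedy selection, which applies whenever the objective's marginal gains are diminishing (as with variance-reduction objectives). The `gain` callback and the toy set-cover example are assumptions introduced purely for illustration; in the paper's setting the gain would be the reduction in expected squared error computed from the Kalman estimator's covariance.

```python
import heapq

def lazy_greedy(candidates, gain, k):
    """Select k items greedily for a monotone objective with diminishing returns.

    gain(item, selected) returns the marginal benefit of adding `item` to the
    current list `selected`. Because marginal gains only shrink as the selected
    set grows, a cached gain is an upper bound, so an item is re-evaluated only
    when it reaches the top of the priority queue (lazy evaluation).
    """
    selected = []
    # Max-heap via negated gains; each entry records the round it was evaluated in.
    heap = [(-gain(c, selected), c, 0) for c in candidates]
    heapq.heapify(heap)
    for rnd in range(1, k + 1):
        while True:
            neg_g, item, evaluated = heapq.heappop(heap)
            if evaluated == rnd:
                # Gain is up to date for this round: greedily accept the item.
                selected.append(item)
                break
            # Stale bound: recompute the marginal gain and push it back.
            heapq.heappush(heap, (-gain(item, selected), item, rnd))
    return selected

# Toy example (hypothetical): pick 2 "observation sets" to cover the most points.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 2, 3, 4, 5}}

def coverage_gain(item, selected):
    covered = set().union(*(sets[s] for s in selected)) if selected else set()
    return len(sets[item] - covered)

picked = lazy_greedy(list(sets), coverage_gain, 2)  # "d" is chosen first
```

The practical payoff is that most candidates are never re-evaluated after the first round, which is what makes greedy design tractable when each gain evaluation requires an expensive covariance update.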