Adaptively optimizing experiments has the potential to significantly reduce the number of trials needed to build parametric statistical models of neural systems. However, the application of adaptive methods to neurophysiology has been limited by severe computational challenges: since most neurons are high-dimensional systems, optimizing neurophysiology experiments requires solving high-dimensional integration and optimization problems in real time. Here we present a fast algorithm for choosing the most informative stimulus by maximizing the mutual information between the data and the unknown parameters of a generalized linear model (GLM) that we want to fit to the neuron's activity. We rely on important log-concavity and asymptotic normality properties of the posterior to facilitate the required computations. Our algorithm requires only low-rank matrix manipulations and a two-dimensional search to choose the optimal stimulus. The average running time of these operations scales quadratically with the dimensionality of the GLM, making real-time adaptive experimental design feasible even for high-dimensional stimulus and parameter spaces; for example, optimizing a 100-dimensional stimulus requires roughly 10 milliseconds on a desktop computer. Despite the approximations used to keep the algorithm efficient, it decreases the uncertainty about the model parameters at the maximum rate predicted by an asymptotic analysis. Simulation results show that choosing stimuli by maximizing the mutual information can speed convergence to the optimal parameter values by an order of magnitude compared with random (nonadaptive) stimuli. Finally, applying our design procedure to real neurophysiology experiments requires addressing the nonstationarities we would expect to see in neural responses; our algorithm can efficiently handle both fast adaptation due to spike-history effects and slow, nonsystematic drifts in a neuron's activity.
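To make the abstract's procedure concrete, below is a minimal sketch in Python/NumPy of this kind of infomax loop, under two assumptions stated in the abstract: a Poisson GLM (here with an exponential nonlinearity) and a Gaussian approximation to the posterior over the filter, N(mu, C). Because a single trial contributes a rank-one term lambda(x) * x x' to the observed Fisher information, the expected information gain 0.5 * E[log(1 + lambda * x' C x)] depends on the stimulus x only through the two scalars mu'x and x' C x (the two-dimensional search the abstract mentions), and the posterior covariance can be updated in O(d^2) via Sherman-Morrison. The function names and the Monte Carlo objective are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of infomax stimulus selection for a Poisson GLM, assuming a Gaussian
# (Laplace-style) posterior N(mu, C) over the filter theta. Hypothetical
# helper names; a didactic sketch, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def expected_info_gain(rho, sigma2, n_mc=200):
    """Monte Carlo estimate of 0.5 * E[log(1 + lambda * x' C x)] for a Poisson
    GLM with exponential nonlinearity, where theta' x ~ N(rho, sigma2) under
    the current posterior and lambda = exp(theta' x)."""
    u = rho + np.sqrt(sigma2) * rng.standard_normal(n_mc)
    return 0.5 * np.mean(np.log1p(np.exp(u) * sigma2))

def select_stimulus(mu, C, candidates):
    """Pick the candidate maximizing expected information gain; note the gain
    depends on each x only through rho = mu' x and sigma2 = x' C x."""
    gains = [expected_info_gain(x @ mu, x @ C @ x) for x in candidates]
    return candidates[int(np.argmax(gains))]

def rank_one_update(mu, C, x, y):
    """Approximate Newton update of the Gaussian posterior after observing
    spike count y. The single-trial Hessian is lambda * x x', so the
    covariance update is rank one (Sherman-Morrison), costing O(d^2)."""
    lam = np.exp(mu @ x)                       # predicted rate at posterior mean
    g = (y - lam) * x                          # gradient of the log-likelihood
    Cx = C @ x
    C_new = C - np.outer(Cx, Cx) * (lam / (1.0 + lam * (x @ Cx)))
    mu_new = mu + C_new @ g                    # one Newton-style step
    return mu_new, C_new

# Toy usage: adaptively probe a simulated 20-dimensional neuron.
d = 20
theta_true = rng.standard_normal(d) / np.sqrt(d)
mu, C = np.zeros(d), np.eye(d)
for t in range(50):
    candidates = rng.standard_normal((100, d))
    candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)  # unit power
    x = select_stimulus(mu, C, candidates)
    y = rng.poisson(np.exp(theta_true @ x))    # simulated spike count
    mu, C = rank_one_update(mu, C, x, y)
print("posterior-mean error:", np.linalg.norm(mu - theta_true))
```

This sketch searches over a finite candidate pool for simplicity; the abstract's quadratic per-trial scaling comes from exploiting the (rho, sigma2) parameterization to optimize the stimulus directly rather than scoring raw candidates, and the full method additionally handles spike-history effects and slow drifts, which are omitted here.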