Statistical nonparametric modeling tools that enable the discovery and approximation of functional forms (e.g., tuning functions) relating neural spiking activity to relevant covariates are desirable in neuroscience. In this article, we show how stochastic gradient boosting regression can be successfully extended to the modeling of spiking activity data while preserving their point process nature, thus providing a robust nonparametric modeling tool. We formulate stochastic gradient boosting in terms of approximating the conditional intensity function of a point process in discrete time and use the standard likelihood of the process to derive the loss function for the approximation problem. To illustrate the approach, we apply the algorithm to the modeling of primary motor and parietal spiking activity as a function of spiking history and kinematics during a two-dimensional reaching task. Model selection, goodness of fit via the time-rescaling theorem, model interpretation via partial dependence plots, ranking of covariates according to their relative importance, and prediction of peri-event time histograms are illustrated and discussed. Additionally, we use the tenfold cross-validated log likelihood of the modeled neural processes (67 cells) to compare the performance of gradient boosting regression to two alternative approaches: standard generalized linear models (GLMs) and Bayesian P-splines with Markov chain Monte Carlo (MCMC) sampling. In our data set, gradient boosting outperformed both Bayesian P-splines (in approximately 90% of the cells) and GLMs (in 100%). Because of its good performance and computational efficiency, we propose stochastic gradient boosting regression as an off-the-shelf nonparametric tool for initial analyses of large neural data sets (e.g., more than 50 cells, more than 10^5 samples per cell) with corresponding multidimensional covariate spaces (e.g., more than four covariates).
In cases where a functional form admits a more compact representation, gradient boosting may also lead to the discovery of simpler, parametric models.
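The core idea described above can be sketched in code: the log conditional intensity of a discrete-time point process, F(x) = log(lambda * dt), is built as a sum of shrunken regression trees, each fit on a random subsample to the negative gradient of the point-process negative log-likelihood, loss = sum(exp(F) - y * F). This is a minimal illustrative sketch, not the authors' implementation; the function name, hyperparameter defaults, and use of scikit-learn trees are assumptions.

```python
# Hedged sketch of stochastic gradient boosting for a discrete-time point
# process (hypothetical helper, not the paper's code). The model F(x)
# approximates log(lambda * dt); the per-bin spike indicator y enters the
# Poisson/point-process loss sum(exp(F) - y*F), whose negative gradient
# with respect to F is simply y - exp(F).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_point_process(X, y, n_rounds=100, lr=0.1, subsample=0.5,
                        depth=3, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize at the constant mean rate (on the log scale).
    F = np.full(len(y), np.log(max(y.mean(), 1e-12)))
    trees = []
    for _ in range(n_rounds):
        # Stochastic step: fit each tree on a random subsample of bins.
        idx = rng.choice(len(y), size=int(subsample * len(y)), replace=False)
        resid = y[idx] - np.exp(F[idx])        # negative gradient of the loss
        tree = DecisionTreeRegressor(max_depth=depth, random_state=0)
        tree.fit(X[idx], resid)
        F += lr * tree.predict(X)              # shrunken additive update
        trees.append(tree)
    return F, trees
```

On simulated data with a covariate-dependent rate, the boosted fit should achieve a lower in-sample point-process loss than the constant-rate initialization, mirroring the likelihood comparisons reported in the abstract.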
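The goodness-of-fit check via the time-rescaling theorem mentioned in the abstract can also be sketched: if the fitted intensity is correct, the integrated intensity between successive spikes is Exp(1)-distributed, so the transformed interspike intervals u = 1 - exp(-tau) should be uniform on (0, 1) and can be compared against the uniform CDF with a Kolmogorov-Smirnov-style statistic. The function below is an assumed minimal implementation, not the authors' code.

```python
# Hedged sketch of a time-rescaling goodness-of-fit statistic (hypothetical
# helper). Inputs: y, a 0/1 spike indicator per time bin, and lam_dt, the
# model's lambda*dt per bin. Under a correct model the rescaled interspike
# intervals tau are ~ Exp(1), so u = 1 - exp(-tau) is ~ Uniform(0, 1).
import numpy as np

def rescaled_isi_ks(y, lam_dt):
    cum = np.cumsum(lam_dt)                 # integrated intensity over time
    spike_idx = np.flatnonzero(y)
    tau = np.diff(cum[spike_idx])           # rescaled interspike intervals
    u = np.sort(1.0 - np.exp(-tau))
    n = len(u)
    grid = (np.arange(1, n + 1) - 0.5) / n  # uniform quantile midpoints
    return np.max(np.abs(u - grid))         # KS-style distance from uniform
```

A small statistic (roughly below the usual 95% KS band, about 1.36/sqrt(n)) indicates the intensity model is consistent with the observed spike train, while a misspecified intensity inflates it.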