We present a framework for efficient, accurate approximate Bayesian inference in generalized linear models (GLMs), based on the expectation propagation (EP) technique. The parameters can be endowed with a factorizing prior distribution, encoding properties such as sparsity or non-negativity. The central role of posterior log-concavity in Bayesian GLMs is emphasized and related to stability issues in EP. In particular, we use our technique to infer the parameters of a point process model for neuronal spiking data from multiple electrodes, demonstrating significantly superior predictive performance when a sparsity assumption is enforced via a Laplace prior distribution.
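The abstract's full method is EP posterior inference, which is beyond a short snippet. As a lightweight illustration of the sparsity idea alone, the sketch below fits a Poisson GLM whose weights carry a Laplace prior, via MAP estimation with proximal gradient descent (ISTA) on the equivalent L1-penalized negative log-likelihood. This is not the paper's EP algorithm, and all data sizes, the penalty weight `lam`, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Poisson GLM: y ~ Poisson(exp(X @ w_true)) with a sparse w_true.
n, d = 200, 10
X = 0.3 * rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[0], w_true[1] = 1.0, -0.5
y = rng.poisson(np.exp(X @ w_true))

lam = 2.0    # Laplace prior scale -> L1 penalty weight (illustrative choice)
step = 1e-3  # fixed step size, small enough for stability on this data

def objective(w):
    # Negative Poisson log-likelihood (up to constants) plus the L1 penalty.
    eta = X @ w
    return np.sum(np.exp(eta) - y * eta) + lam * np.sum(np.abs(w))

w = np.zeros(d)
f0 = objective(w)
for _ in range(2000):
    grad = X.T @ (np.exp(X @ w) - y)  # gradient of the smooth likelihood term
    z = w - step * grad               # gradient step on the smooth part
    # Proximal step for the L1 term: soft-thresholding drives weights to zero.
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("objective: %.2f -> %.2f" % (f0, objective(w)))
print("w_hat:", np.round(w, 2))
```

Because the Poisson log-likelihood is log-concave and the L1 penalty is convex, this MAP problem is convex; the abstract's emphasis on posterior log-concavity plays the analogous stabilizing role for EP.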