Bayesian Inference for Sparse Generalized Linear Models

  • Authors:
  • Matthias Seeger, Sebastian Gerwinn, Matthias Bethge

  • Affiliation (all authors):
  • Max Planck Institute for Biological Cybernetics, Spemannstr. 38, Tübingen, Germany

  • Venue:
  • ECML '07: Proceedings of the 18th European Conference on Machine Learning
  • Year:
  • 2007

Abstract

We present a framework for efficient, accurate approximate Bayesian inference in generalized linear models (GLMs), based on the expectation propagation (EP) technique. The parameters can be endowed with a factorizing prior distribution, encoding properties such as sparsity or non-negativity. The central role of posterior log-concavity in Bayesian GLMs is emphasized and related to stability issues in EP. In particular, we use our technique to infer the parameters of a point process model for neuronal spiking data from multiple electrodes, demonstrating significantly superior predictive performance when a sparsity assumption is enforced via a Laplace prior distribution.
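The paper's expectation-propagation algorithm approximates the full posterior and is not reproduced here. The sparsity effect of a Laplace prior can still be illustrated with a much simpler, related fact: MAP estimation in a GLM under a Laplace prior is equivalent to L1-penalized fitting. The sketch below solves L1-penalized logistic regression by proximal gradient descent (ISTA) on synthetic data; the data, the penalty strength `lam`, and the step size are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Synthetic sparse-GLM data: only the first 3 of 20 weights are nonzero.
rng = np.random.default_rng(0)
n, d = 400, 20
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1; it produces exact zeros."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# MAP under a factorizing Laplace prior = L1-penalized logistic regression,
# solved here by proximal gradient descent (ISTA). lam and step are assumed.
lam, step = 0.1, 0.2
w = np.zeros(d)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted Bernoulli probabilities
    grad = X.T @ (p - y) / n           # gradient of the mean log-loss
    w = soft_threshold(w - step * grad, step * lam)

print(np.round(w, 2))  # many of the 17 irrelevant weights are exactly zero
```

Note that this recovers only the posterior mode; the paper's EP framework instead computes Gaussian approximations to the full (log-concave) posterior, which is what enables the predictive-performance comparisons reported in the abstract.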