Fast Variational Sparse Bayesian Learning With Automatic Relevance Determination for Superimposed Signals

  • Authors:
  • Dmitriy Shutin; Thomas Buchgraber; Sanjeev R. Kulkarni; H. Vincent Poor

  • Affiliations:
  • Department of Electrical Engineering, Princeton University, Princeton, NJ, USA (Shutin, Kulkarni, Poor); Signal Processing and Speech Communication Laboratory, Graz University of Technology, Graz, Austria (Buchgraber)

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 2011

Abstract

In this work, a new fast variational sparse Bayesian learning (SBL) approach with automatic relevance determination (ARD) is proposed. Sparse Bayesian modeling, exemplified by the relevance vector machine (RVM), allows a sparse regression or classification function to be constructed as a linear combination of a few basis functions. It is demonstrated that, by computing the stationary points of the variational update expressions with noninformative (ARD) hyperpriors, a fast version of variational SBL can be constructed. Analysis of the computed stationary points indicates that SBL with Gaussian sparsity priors and noninformative hyperpriors corresponds to removing components whose signal-to-noise ratio falls below a 0 dB threshold; this threshold can also be adjusted to significantly improve the convergence rate and sparsity of SBL. It is demonstrated that the pruning conditions derived for fast variational SBL coincide with those obtained for fast marginal likelihood maximization; moreover, the parameters that maximize the variational lower bound also maximize the marginal likelihood function. The effectiveness of fast variational SBL is demonstrated on synthetic as well as real data.
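The component-pruning idea described in the abstract can be illustrated with a short sketch. The code below is not the authors' exact derivation: it is a minimal, assumed implementation of variational SBL with ARD in which each basis function's estimated component SNR (posterior mean squared over posterior variance) is compared against an adjustable threshold `tau_db`, with 0 dB corresponding to the default pruning condition mentioned above. The function name, the known-noise-variance assumption, and the specific update `alpha_i = 1 / (mu_i^2 + Sigma_ii)` (the variational ARD update under noninformative hyperpriors) are illustrative choices.

```python
import numpy as np

def fast_variational_sbl(Phi, t, noise_var, tau_db=0.0, n_iter=200):
    """Illustrative sketch of variational SBL with ARD and an adjustable
    pruning threshold (not the paper's exact algorithm). A basis function
    is removed when its estimated component SNR, mu_i^2 / Sigma_ii, falls
    below tau; tau_db = 0 dB (tau = 1) reproduces the default condition."""
    tau = 10.0 ** (tau_db / 10.0)        # dB threshold -> linear scale
    beta = 1.0 / noise_var               # noise precision (assumed known)
    M = Phi.shape[1]
    alpha = np.ones(M)                   # ARD precisions, all components active
    active = np.ones(M, dtype=bool)

    def posterior(idx):
        # Gaussian posterior over the weights of the active components
        P = Phi[:, idx]
        Sigma = np.linalg.inv(np.diag(alpha[idx]) + beta * P.T @ P)
        mu = beta * Sigma @ (P.T @ t)
        return mu, Sigma

    for _ in range(n_iter):
        idx = np.flatnonzero(active)
        if idx.size == 0:
            break
        mu, Sigma = posterior(idx)
        snr = mu**2 / np.diag(Sigma)     # per-component SNR estimate
        active[idx[snr <= tau]] = False  # prune components at/below threshold
        keep = snr > tau
        # Variational ARD update with noninformative hyperpriors
        alpha[idx[keep]] = 1.0 / (mu[keep]**2 + np.diag(Sigma)[keep])

    idx = np.flatnonzero(active)
    mu, _ = posterior(idx)
    return mu, idx
```

Raising `tau_db` above 0 dB prunes more aggressively, which is the mechanism the abstract credits for improved convergence rate and sparsity; components whose weights only fit noise tend to have SNR estimates near the threshold and are removed, while genuinely relevant components sit far above it.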