SPICE (SParse Iterative Covariance-based Estimation) is a recently introduced method for sparse-parameter estimation in linear models using a robust covariance fitting criterion that does not depend on any hyperparameters. In this paper we revisit the derivation of SPICE to streamline it and to provide further insights into this method. LIKES (LIKelihood-based Estimation of Sparse parameters) is a new method obtained in a hyperparameter-free manner from the maximum-likelihood principle applied to the same estimation problem as considered by SPICE. Both SPICE and LIKES are shown to provide accurate parameter estimates even from scarce data samples, with LIKES being more accurate than SPICE at the cost of an increased computational burden.
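To make the covariance-fitting idea concrete, below is a minimal single-snapshot sketch in the SPICE style. The abstract does not give the algorithm, so this is an illustration only: the fixed-point update, the weights `w_k = ||a_k||^2 / ||y||^2`, the normalization `sum_k w_k p_k = 1`, and the function name are assumptions drawn from the general sparse covariance-fitting literature, not the paper's exact method.

```python
import numpy as np

def spice_single_snapshot(y, A, n_iter=50, eps=1e-9):
    """Hypothetical sketch of a single-snapshot SPICE-style iteration.

    Assumed model: R = A diag(p) A^H, with the covariance-fitting
    criterion reduced to minimizing y^H R^{-1} y subject to
    sum_k w_k p_k = 1, where w_k = ||a_k||^2 / ||y||^2.
    The multiplicative update below preserves that constraint and
    involves no tuning hyperparameters, mirroring the paper's theme.
    """
    N, M = A.shape
    w = np.sum(np.abs(A) ** 2, axis=0) / np.linalg.norm(y) ** 2  # column weights
    p = np.ones(M) / w.sum()  # flat initialization of the sparse powers
    for _ in range(n_iter):
        # Model covariance R = A diag(p) A^H, regularized for invertibility
        R = (A * p) @ A.conj().T + eps * np.eye(N)
        Riy = np.linalg.solve(R, y)               # R^{-1} y
        c = np.abs(A.conj().T @ Riy)              # |a_k^H R^{-1} y|
        num = p * c / np.sqrt(w)
        denom = np.sum(np.sqrt(w) * p * c) + eps  # renormalization factor
        p = num / denom                           # keeps sum_k w_k p_k = 1
    return p
```

On a small overcomplete Fourier grid with a noiseless single-atom signal, the iteration concentrates the power estimates `p` on the active atom; LIKES would replace the covariance-fitting criterion with a maximum-likelihood one at higher per-iteration cost.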