Given a large overcomplete dictionary of basis vectors, the goal is to simultaneously represent L>1 signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where multiple responses exist that were putatively generated by the same small subset of features. Ideally, the associated sparse generating weights should be recovered, which can have physical significance in many applications (e.g., source localization). The generic solution to this problem is intractable, so approximate procedures are sought. Based on the concept of automatic relevance determination, this paper uses an empirical Bayesian prior to estimate a convenient posterior distribution over candidate basis vectors. This particular approximation enforces a common sparsity profile and consistently places its prominent posterior mass on the region of weight-space necessary for simultaneous sparse recovery. The resultant algorithm is then compared with multiple-response extensions of matching pursuit, basis pursuit, FOCUSS, and Jeffreys prior-based Bayesian methods, and is found to often outperform them. Additional motivation for this particular choice of cost function is also provided, including an analysis of global and local minima and a variational derivation that highlights the similarities and differences between the proposed algorithm and previous approaches.
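To make the idea concrete, below is a minimal NumPy sketch of an EM-style update for an automatic-relevance-determination model of this kind: each dictionary column j gets a shared hyperparameter gamma_j across all L responses, and columns whose gamma_j shrinks toward zero drop out of the common sparsity profile. The function name `msbl`, the fixed iteration count, and the noise-variance handling are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def msbl(Phi, Y, noise_var=1e-4, n_iter=300):
    """Illustrative EM updates for multiple-response sparse Bayesian learning.

    Phi : (N, M) overcomplete dictionary (M > N).
    Y   : (N, L) matrix of L response vectors sharing one sparsity profile.
    Returns posterior mean weights Mu (M, L) and hyperparameters gamma (M,).
    """
    N, M = Phi.shape
    L = Y.shape[1]
    gamma = np.ones(M)  # one ARD hyperparameter per column, shared across responses
    for _ in range(n_iter):
        # Marginal covariance of the data: noise_var*I + Phi diag(gamma) Phi^T
        Sigma_y = noise_var * np.eye(N) + (Phi * gamma) @ Phi.T
        # Posterior mean: Mu = diag(gamma) Phi^T Sigma_y^{-1} Y
        Mu = gamma[:, None] * (Phi.T @ np.linalg.solve(Sigma_y, Y))
        # Diagonal of the posterior covariance of each weight row
        diag_term = np.sum(Phi * np.linalg.solve(Sigma_y, Phi), axis=0)
        Sigma_diag = gamma - gamma**2 * diag_term
        # EM hyperparameter update; small columns shrink toward zero
        gamma = np.maximum(np.sum(Mu**2, axis=1) / L + Sigma_diag, 1e-12)
    return Mu, gamma
```

Because all L responses share the gamma vector, evidence from every response jointly determines which columns survive, which is what enforces the common sparsity profile.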