One of the most common problems in machine learning and statistics consists of estimating the mean response Xβ from a vector of observations y, assuming y = Xβ + ε, where X is a known design matrix, β is a vector of parameters of interest, and ε is a vector of stochastic errors. We are particularly interested here in the case where the dimension K of β is much larger than the dimension of y. We propose flexible Bayesian models that can yield sparse estimates of β, and we show that as K → ∞ these models are closely related to a class of Lévy processes. Simulations demonstrate that our models significantly outperform a range of popular alternatives.
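To make the problem setting concrete, below is a minimal Python sketch of the underdetermined regression setup described above: K parameters, far fewer observations, and a sparse true β. The fitting step uses a standard Lasso estimator purely as a stand-in; it is not the Bayesian model proposed in the paper, and the dimensions, noise level, and regularization strength are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# K >> n: many more parameters than observations (illustrative sizes).
n, K = 50, 200
X = rng.standard_normal((n, K))

# Sparse ground truth: only a handful of nonzero coefficients.
beta = np.zeros(K)
beta[:5] = 3.0 * rng.standard_normal(5)

# Observations y = X beta + noise.
y = X @ beta + 0.1 * rng.standard_normal(n)

# Lasso stands in for a sparsity-inducing estimator; the paper's
# Bayesian models would replace this fitting step.
fit = Lasso(alpha=0.05).fit(X, y)
print("recovered nonzero indices:", np.flatnonzero(np.abs(fit.coef_) > 1e-3))
```

Any sparsity-inducing estimator could be substituted at the fitting step; the point of the sketch is only the shape of the problem: recovering a mostly-zero β from far fewer observations than parameters.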