Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or superresolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density's mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex if and only if posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images. Parts of this work have been presented at conferences [M. Seeger, H. Nickisch, R. Pohmann, and B. Schölkopf, in Advances in Neural Information Processing Systems 21, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, eds., Curran Associates, Red Hook, NY, 2009, pp. 1441-1448; H. Nickisch and M. Seeger, in Proceedings of the 26th International Conference on Machine Learning, L. Bottou and M. Littman, eds., Omni Press, Madison, WI, 2009, pp. 761-768].
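To make the "posterior mode finding" baseline concrete: MAP estimation in a sparse linear model with a Laplace prior reduces to L1-regularized least squares. The sketch below is not the authors' variational method; it is a minimal, generic illustration of SLM mode finding via iterative soft-thresholding (ISTA), with all names and parameter values chosen for the toy example.

```python
import numpy as np

def slm_map(X, y, tau, sigma2=1.0, n_iter=500):
    """MAP estimate for a sparse linear model (Laplace prior):
    minimize ||y - X u||^2 / (2*sigma2) + tau * ||u||_1
    via iterative soft-thresholding (ISTA). Illustrative only."""
    # Lipschitz constant of the smooth part's gradient sets the step size.
    L = np.linalg.norm(X, 2) ** 2 / sigma2
    u = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ u - y) / sigma2   # gradient of the quadratic term
        z = u - grad / L                    # gradient step
        u = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)  # soft threshold
    return u

# Toy reconstruction: recover a 3-sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
u_true = np.zeros(50)
u_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
y = X @ u_true + 0.05 * rng.standard_normal(100)
u_hat = slm_map(X, y, tau=2.0)
```

Note the contrast drawn in the abstract: this procedure returns only the posterior mode, whereas the experimental-design applications discussed above require posterior covariance information, which mode finding alone cannot supply.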