We show how variational Bayesian inference can be implemented for very large generalized linear models. Our relaxation is proven to be a convex problem for any log-concave model. We provide a generic double loop algorithm for solving this relaxation on models with arbitrary super-Gaussian potentials. The algorithm iteratively decouples the criterion so that most of the work reduces to solving large linear systems, rendering it orders of magnitude faster than previously proposed solvers for the same problem. We evaluate our method on problems of Bayesian active learning for large binary classification models and show how to address settings with many candidates and sequential inclusion steps.
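The abstract only outlines the double loop scheme, so the following minimal NumPy sketch illustrates the idea for a sparse linear model with Laplace (super-Gaussian) potentials. The outer loop refits an upper bound on the log-determinant coupling term via Gaussian marginal variances, and the inner loop minimizes the resulting decoupled penalized least-squares criterion through reweighted linear system solves. The function name `double_loop_vb`, the IRLS inner solver, and the dense matrix operations are illustrative assumptions, not the authors' implementation; a large-scale version would replace the dense solves with conjugate gradients and the variance computation with Lanczos-style estimators, in line with the abstract's emphasis on large linear systems.

```python
import numpy as np

def double_loop_vb(X, y, tau=1.0, sigma2=1.0, n_outer=10, n_inner=30):
    """Hypothetical sketch of double-loop variational bound minimization
    for y = X u + noise with Laplace potentials exp(-tau * |u_i|)."""
    n, d = X.shape
    u = np.zeros(d)
    gamma = np.ones(d)  # variational width parameters of the bound
    for _ in range(n_outer):
        # Outer step: marginal variances z_i of the current Gaussian
        # approximation (dense inverse here for clarity; large-scale
        # codes estimate these iteratively).
        A = X.T @ X / sigma2 + np.diag(1.0 / gamma)
        z = np.diag(np.linalg.inv(A))
        # Inner loop: with z fixed, the bound decouples into the smooth
        # criterion ||y - X u||^2 / sigma2 + sum_i 2*tau*sqrt(z_i + u_i**2),
        # minimized by iteratively reweighted least squares, i.e. a
        # sequence of linear system solves.
        for _ in range(n_inner):
            gamma = np.sqrt(z + u ** 2) / tau
            A = X.T @ X / sigma2 + np.diag(1.0 / gamma)
            u = np.linalg.solve(A, X.T @ y / sigma2)
    return u, gamma

# Toy usage: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
u_true = np.zeros(50)
u_true[:5] = 3.0
y = X @ u_true + 0.1 * rng.standard_normal(100)
u_hat, _ = double_loop_vb(X, y, tau=2.0, sigma2=0.01)
```

If the abstract's convexity result applies (log-concave potentials), the relaxed criterion has a unique minimum, so alternating between the two loops is a descent strategy toward the global solution rather than a local heuristic.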