In many models, variances are assumed to be constant, although this assumption is often unrealistic in practice. Joint modelling of means and variances is difficult in many learning approaches because it can lead to infinite probability densities. We show that a Bayesian variational technique, which is sensitive to probability mass instead of density, is able to model both variances and means jointly. We consider a model structure in which a Gaussian variable, called a variance node, controls the variance of another Gaussian variable. Variance nodes make it possible to build hierarchical models for both variances and means. We report experiments with artificial data which demonstrate the ability of the learning algorithm to find variance sources that explain and characterize the variances in multidimensional data well. Experiments with biomedical MEG data show that variance sources are present in real-world signals.
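To make the variance-node idea concrete, below is a minimal generative sketch, not the paper's full hierarchical model or its variational learning algorithm. It assumes one common parameterization in which the variance node enters through an exponential nonlinearity (so variances stay positive); the dimensions, loadings `B`, and i.i.d. variance source `u` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: one variance source driving a 5-dimensional observation.
T, D = 1000, 5

# Variance source u(t): a Gaussian latent signal (i.i.d. here for simplicity;
# a hierarchical model would itself put structure on u).
u = rng.normal(loc=0.0, scale=1.0, size=T)

# Assumed loadings mapping the variance source to per-dimension log-variances.
B = rng.normal(size=D)

# Observation x(t): a Gaussian variable whose variance is controlled by the
# variance node via exp(.), one way to keep variances positive.
log_var = np.outer(u, B)                        # (T, D) log-variances
x = rng.normal(loc=0.0, scale=np.exp(0.5 * log_var))

# Dimensions sharing the variance source show co-occurring bursts of
# activity even though x has (near) zero linear correlations.
print(np.corrcoef(x, rowvar=False).round(2))
```

Data generated this way is roughly uncorrelated in the second-order sense, yet the magnitudes of the components move together; this is the kind of variance dependency that the learning algorithm is reported to recover from multidimensional data.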