We propose a simple transformation of the hidden states in variational Bayesian factor analysis models to speed up the learning procedure. The speed-up is achieved by using a proper parameterization of the posterior approximation that allows joint optimization of its individual factors, so the transformation is theoretically justified. We derive the transformation formulae for variational Bayesian factor analysis and show experimentally that the transformation can significantly improve the rate of convergence. In essence, the proposed transformation performs centering and whitening of the hidden factors while taking the posterior uncertainties into account. Similar transformations can be applied to other variational Bayesian factor analysis models as well.
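To make the idea concrete, the sketch below shows one plausible way to center and whiten a set of posterior factor estimates while accounting for their posterior covariances. This is an illustrative reconstruction under stated assumptions, not the authors' exact transformation formulae: the function `center_whiten` and its arguments are hypothetical names, and the whitening matrix is derived here via an eigendecomposition of the uncertainty-aware second moment.

```python
import numpy as np

def center_whiten(means, covs):
    """Center and whiten posterior factor estimates, including
    posterior uncertainty in the second moment.

    means: (n, d) array of posterior means of the hidden factors.
    covs:  (n, d, d) array of posterior covariances.

    Hypothetical sketch; not the paper's exact update.
    """
    # Center the posterior means.
    mu = means.mean(axis=0)
    centered = means - mu
    # Uncertainty-aware second moment: E[x x^T] averaged over samples,
    # i.e. mean of (m m^T + Cov) for the centered means.
    S = (centered[:, :, None] * centered[:, None, :] + covs).mean(axis=0)
    # Symmetric whitening matrix W with W S W^T = I.
    vals, vecs = np.linalg.eigh(S)
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    # Apply the linear transformation to both means and covariances.
    new_means = centered @ W.T
    new_covs = W @ covs @ W.T  # broadcasts over the n covariances
    return new_means, new_covs
```

After the transformation, the factors have zero mean and an identity second moment (uncertainty included), which is the kind of reparameterization that lets the individual posterior factors be optimized jointly rather than one at a time.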