The variational Bayesian (VB) approximation is known to be a promising approach to Bayesian estimation when rigorous calculation of the Bayes posterior is intractable. The VB approximation has been successfully applied to matrix factorization (MF), offering automatic dimensionality selection for principal component analysis. Generally, finding the VB solution is a nonconvex problem, and most methods rely on a local search algorithm derived through a standard procedure for the VB approximation. In this paper, we show that a better option is available for fully-observed VBMF: the global solution can be computed analytically. More specifically, the global solution is a reweighted SVD of the observed matrix, and each weight can be obtained by solving a quartic equation whose coefficients are functions of the corresponding observed singular value. We further show that the global optimal solution of empirical VBMF (where hyperparameters are also learned from the data) can also be computed analytically. We illustrate the usefulness of our results through experiments in multivariate analysis.
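The "reweighted SVD" structure of the solution can be sketched as follows. Note that the weight function used here is a hypothetical hard-shrinkage placeholder for illustration only; in the paper, each weight is obtained as a root of a quartic equation whose coefficients depend on the observed singular value (and on the noise variance and hyperparameters), not by simple thresholding.

```python
import numpy as np

def reweighted_svd(V, weight_fn):
    """Reconstruct sum_h w_h * u_h v_h^T, where w_h = weight_fn(gamma_h)
    and (u_h, gamma_h, v_h) come from the SVD of the observed matrix V."""
    U, gammas, Vt = np.linalg.svd(V, full_matrices=False)
    weights = np.array([weight_fn(g) for g in gammas])
    # Scale each left singular vector (column of U) by its weight, then recombine.
    return (U * weights) @ Vt

# Hypothetical weight function: keep singular values above a threshold,
# zero out the rest (stand-in for the paper's quartic-root weights).
def hard_shrink(threshold):
    return lambda g: g if g > threshold else 0.0

# Toy example: a diagonal matrix with singular values 5, 2, 1.
V = np.diag([5.0, 2.0, 1.0])
V_hat = reweighted_svd(V, hard_shrink(3.0))  # keeps only the leading component
```

Any per-singular-value weighting rule (including the analytic VBMF weights) slots into `weight_fn`, so the same skeleton covers both the plain VB and empirical VB solutions described in the abstract.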