We introduce a robust probabilistic L1-PCA model in which the conventional Gaussian distribution for the noise in the observed data is replaced by the Laplacian (or L1) distribution. Owing to the heavy tails of the L1 distribution, the proposed model is expected to be more robust against outliers in the data. In this letter, we demonstrate how a variational approximation scheme enables effective inference of the key parameters in the probabilistic L1-PCA model. Because the L1 density can be expanded as a superposition of an infinite number of Gaussian densities, we express the L1-PCA model as a model marginalized over these superpositions. In this way, tractable Bayesian inference is achieved via a variational expectation-maximization-type algorithm.
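The key expansion above is the classical Gaussian scale-mixture identity: a Laplace variable with scale b can be generated by first drawing a variance from an exponential distribution with mean 2b^2 and then drawing a Gaussian with that variance. A minimal numerical sketch of this identity (not the paper's code; the parameter names are illustrative):

```python
import numpy as np

# Laplace density as a Gaussian scale mixture:
# if v ~ Exponential(mean = 2*b**2) and x | v ~ N(0, v),
# then marginally x ~ Laplace(0, b).  This is the expansion
# that makes variational Bayesian inference tractable, since
# conditioned on v the model is fully Gaussian.
rng = np.random.default_rng(0)
b = 1.0                                       # Laplace scale parameter
n = 200_000
v = rng.exponential(scale=2 * b**2, size=n)   # per-sample variance draws
x = rng.normal(0.0, np.sqrt(v))               # Gaussian given that variance

# The marginal should match Laplace moments: Var[x] = 2*b**2, E|x| = b.
print(x.var(), np.abs(x).mean())
```

Conditioning on the latent variances v turns the L1 noise model into a weighted Gaussian one, which is why an EM-type algorithm (E-step: posterior over v; M-step: Gaussian PCA updates) becomes available.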