We define and discuss a novel sparse coding algorithm based on closed-form EM updates and continuous latent variables. The underlying generative model consists of a standard 'spike-and-slab' prior and a Gaussian noise model. Closed-form solutions for the E- and M-step equations are derived by generalizing probabilistic PCA. The resulting EM algorithm can take all modes of a potentially multimodal posterior into account. The computational cost of the algorithm scales exponentially with the number of hidden dimensions, but with current computational resources it remains feasible to efficiently learn model parameters for medium-scale problems. The algorithm can therefore be applied to the typical range of source separation tasks. In numerical experiments on artificial data, we verify likelihood maximization and show that the derived algorithm recovers the sparse directions of standard sparse coding distributions. On source separation benchmarks with realistic data, we show that the algorithm is competitive with other recent methods.
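To make the exponential scaling in the number of hidden dimensions concrete, the sketch below enumerates the exact posterior over binary spike configurations for a spike-and-slab linear Gaussian model. This is a minimal illustration under assumptions consistent with the abstract, not the paper's implementation: the function name `posterior_over_states`, the shared Bernoulli parameter `pi`, and the scalar slab variance `sigma_z2` are hypothetical choices made here for clarity.

```python
import itertools
import numpy as np
from scipy.stats import multivariate_normal

def posterior_over_states(y, W, pi, sigma2, sigma_z2=1.0):
    """Exact posterior over binary spike configurations (assumed model).

    Assumed spike-and-slab generative model:
        s_h ~ Bernoulli(pi),  z ~ N(0, sigma_z2 * I),
        y = W (s * z) + eps,  eps ~ N(0, sigma2 * I).
    Marginalizing the continuous slab z for a fixed configuration s gives
        p(y | s) = N(y; 0, sigma_z2 * W_s W_s^T + sigma2 * I),
    where W_s keeps only the columns of W with s_h = 1. Enumerating all
    2^H configurations is what makes the cost exponential in H.
    """
    D, H = W.shape
    states = list(itertools.product([0, 1], repeat=H))
    log_post = np.empty(len(states))
    for i, s in enumerate(states):
        s = np.asarray(s)
        # Zero out inactive columns; cov = sigma_z2 * W_s W_s^T + sigma2 * I.
        Ws = W * s
        cov = sigma_z2 * Ws @ Ws.T + sigma2 * np.eye(D)
        log_lik = multivariate_normal.logpdf(y, mean=np.zeros(D), cov=cov)
        log_prior = np.sum(s * np.log(pi) + (1 - s) * np.log(1 - pi))
        log_post[i] = log_lik + log_prior
    log_post -= np.logaddexp.reduce(log_post)  # normalize in log space
    return states, np.exp(log_post)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))      # D = 4 observed, H = 3 hidden dimensions
    y = rng.normal(size=4)
    states, post = posterior_over_states(y, W, pi=0.2, sigma2=0.1)
    print(max(zip(post, states)))    # most probable spike configuration
```

Because every one of the 2^H configurations is visited, the sketch also shows why medium-scale hidden dimensionalities remain tractable while large ones do not; within each configuration the continuous posterior is Gaussian, which is the structure the closed-form E- and M-step updates exploit.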