Deterministic annealing EM algorithm. Neural Networks.
Pattern Analysis & Applications.
A fast learning algorithm for deep belief nets. Neural Computation.
Sparse coding via thresholding and local competition in neural circuits. Neural Computation.
Online Learning for Matrix Factorization and Sparse Coding. The Journal of Machine Learning Research.
LVA/ICA'10: Proceedings of the 9th International Conference on Latent Variable Analysis and Signal Separation.
Expectation Truncation and the Benefits of Preselection in Training Generative Models. The Journal of Machine Learning Research.
K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Transactions on Signal Processing.
We study a novel sparse coding model with a discrete and symmetric prior distribution. Instead of using continuous latent variables distributed according to heavy-tailed distributions, the latent variables of our approach are discrete. In contrast to approaches using binary latents, we use latents with three states (-1, 0, and 1) following a symmetric, zero-mean distribution. Despite using discrete latents, the model thus maintains important properties of standard sparse coding models and of their recent variants. To efficiently train the parameters of our probabilistic generative model, we apply a truncated variational EM approach (Expectation Truncation). The resulting learning algorithm infers all model parameters, including the variance of the data noise and the data sparsity. In numerical experiments on artificial data, we show that the algorithm efficiently recovers the generating parameters, and we find that the applied variational approach helps avoid local optima. Using experiments on natural image patches, we demonstrate the large-scale applicability of the approach and study the obtained Gabor-like basis functions.
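To make the generative model concrete, the following is a minimal sketch of sampling from a ternary sparse coding model as described above: each latent is 0 with probability 1 - pi and takes value -1 or +1 with probability pi/2 each, and observations are a linear combination of basis functions plus isotropic Gaussian noise. The function name, parameterization, and all identifiers are illustrative assumptions, not taken from the paper's code.

import numpy as np

def sample_ternary_sparse_coding(W, pi, sigma, n_samples, rng=None):
    """Draw data points y = W s + noise, with ternary latents s.

    Each latent s_h is 0 with probability 1 - pi, and -1 or +1 with
    probability pi/2 each (a symmetric, zero-mean discrete prior).
    W has shape (D, H): one basis function per column.
    """
    rng = np.random.default_rng() if rng is None else rng
    D, H = W.shape
    # Sample latents: first whether each unit is active, then its sign.
    active = rng.random((n_samples, H)) < pi
    signs = rng.choice([-1.0, 1.0], size=(n_samples, H))
    S = np.where(active, signs, 0.0)
    # Linear combination of basis functions plus isotropic Gaussian noise.
    Y = S @ W.T + sigma * rng.standard_normal((n_samples, D))
    return Y, S

Data generated this way can serve as the artificial ground truth mentioned in the abstract: one samples Y from a known W, pi, and sigma and checks whether learning recovers those generating parameters.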
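The truncated variational E-step can be sketched in the same spirit. Under the Expectation Truncation idea, the intractable sum over all 3^H ternary states is replaced by a sum over a preselected candidate subset of latents and a bounded number of non-zero entries. The selection score |W_h^T y| and the parameters H_prime and gamma below are illustrative assumptions, not necessarily the paper's selection function or settings.

import itertools
import numpy as np

def truncated_posterior_states(y, W, pi, sigma, H_prime=6, gamma=3):
    """Enumerate a truncated set of ternary states for one data point y.

    Preselects the H_prime latents whose basis functions best match y
    (assumed score: |W_h^T y|), then weights only ternary states whose
    non-zero entries lie in that candidate set and number at most gamma.
    Returns the states and their normalized posterior weights.
    """
    D, H = W.shape
    # Preselection: score each latent by alignment of its basis with y.
    scores = np.abs(W.T @ y)
    candidates = np.argsort(scores)[-H_prime:]
    states, log_joints = [], []
    for k in range(gamma + 1):
        for subset in itertools.combinations(candidates, k):
            for sgn in itertools.product([-1.0, 1.0], repeat=k):
                s = np.zeros(H)
                s[list(subset)] = sgn
                # log p(y, s) up to constants: Gaussian likelihood + prior.
                ll = -np.sum((y - W @ s) ** 2) / (2.0 * sigma**2)
                lp = k * np.log(pi / 2.0) + (H - k) * np.log(1.0 - pi)
                states.append(s)
                log_joints.append(ll + lp)
    log_joints = np.array(log_joints)
    # Posterior weights over the truncated state set (softmax of log joints).
    q = np.exp(log_joints - log_joints.max())
    q /= q.sum()
    return np.array(states), q

The returned weighted states stand in for the full posterior in the M-step updates, which is what makes the variational EM scheme tractable for larger numbers of latents H.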