We show how a preselection of hidden variables can be used to efficiently train generative models with binary hidden variables. The approach is based on Expectation Maximization (EM) and uses an efficiently computable approximation to the sufficient statistics of a given model. The cost of computing the sufficient statistics is substantially reduced by selecting, for each data point, only the relevant hidden causes. The approximation is applicable to a wide range of generative models, and the benefits of preselection can be interpreted in terms of a variational EM approximation. To show empirically that the method maximizes the data likelihood, it is applied to several types of generative models, including a version of non-negative matrix factorization (NMF), a model for non-linear component extraction (MCA), and a linear generative model similar to sparse coding. The derived algorithms are applied to both artificial and realistic data and compared to other models in the literature. We find that the training scheme can reduce computational costs by orders of magnitude and allows for reliable extraction of hidden causes.
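The abstract gives no pseudocode; the following is a minimal sketch of the kind of truncated E-step that per-data-point preselection suggests, written against a generic model interface. The functions log_joint and select_score, and the parameters H_prime (number of preselected variables) and gamma (maximum number of simultaneously active causes), are illustrative assumptions, not the paper's actual API.

```python
import itertools
import numpy as np

def truncated_posterior(y, log_joint, H, H_prime, gamma, select_score):
    """Truncated E-step with preselection (illustrative sketch).

    y            : a single data point
    log_joint    : s -> log p(s, y | Theta) for a binary hidden
                   vector s of length H (model-specific, assumed given)
    H            : total number of binary hidden variables
    H_prime      : number of hidden variables preselected per data point
    gamma        : maximum number of simultaneously active causes
    select_score : (h, y) -> relevance of hidden variable h for y
                   (model-specific selection function, assumed given)
    """
    # 1) Preselect the H' hidden variables most relevant to this data point.
    scores = np.array([select_score(h, y) for h in range(H)])
    candidates = np.argsort(scores)[-H_prime:]

    # 2) Enumerate only the binary states with at most gamma active units
    #    among the candidates; all other variables stay clamped to zero.
    states, log_p = [], []
    for k in range(gamma + 1):
        for active in itertools.combinations(candidates, k):
            s = np.zeros(H, dtype=int)
            s[list(active)] = 1
            states.append(s)
            log_p.append(log_joint(s))

    # 3) Normalize over the truncated state set; these weights stand in
    #    for the exact posterior when accumulating sufficient statistics.
    log_p = np.array(log_p)
    weights = np.exp(log_p - log_p.max())
    weights /= weights.sum()
    return states, weights
```

The savings come from step 2: instead of summing over all 2^H binary hidden states, the E-step sums only over states of the H' preselected variables with at most gamma active units, which is what reduces the cost of the sufficient statistics for sparsely active models.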