Maximal Causes for Non-linear Component Extraction
The Journal of Machine Learning Research
We develop a probabilistic interpretation of non-linear component extraction in neural networks that activate their hidden units according to a softmax-like mechanism. On the basis of a generative model that combines hidden causes using the max-function, we show how the extraction of input components in such networks can be interpreted as maximum likelihood parameter optimization. A simple and neurally plausible Hebbian Δ-rule is derived. For approximately optimal learning, the activity of the hidden neural units is described by a generalized softmax function, and the classical softmax is recovered for very sparse input. We use the bars benchmark test to numerically verify our analytical results and to show the competitiveness of the derived learning algorithms.
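The setting described above can be illustrated with a minimal sketch: bars-test data generated by combining binary hidden causes through the max-function, and hidden units driven by a softmax whose activities gate a Hebbian Δ-rule. The constants (grid size, number of units, learning rate, softmax sharpness β) and the exact update form are illustrative assumptions, not the paper's derived learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bars test (assumed setup): a 5x5 pixel grid with H = 10 hidden causes,
# one per horizontal and vertical bar. Each bar is active independently
# with probability p, and active bars combine via the max-function.
def make_bars_batch(n, size=5, p=0.2):
    bars = []
    for i in range(size):
        b = np.zeros((size, size)); b[i, :] = 1.0; bars.append(b.ravel())
    for j in range(size):
        b = np.zeros((size, size)); b[:, j] = 1.0; bars.append(b.ravel())
    bars = np.array(bars)                       # shape (H, D)
    s = rng.random((n, bars.shape[0])) < p      # binary hidden causes
    # non-linear (max) superposition of the active causes
    y = np.max(s[:, :, None] * bars[None], axis=1)
    return y

def softmax(x, beta=1.0):
    z = beta * x - np.max(beta * x, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hedged sketch of softmax-gated Hebbian learning (not the exact derived rule):
# each unit's activity g_h gates a Δ-rule pulling its weights toward the input.
D, H, eps, beta = 25, 10, 0.05, 8.0
W = rng.random((H, D)) * 0.1 + 0.45             # weights start near uniform

for step in range(3000):
    y = make_bars_batch(1)[0]                   # one input pattern
    g = softmax(W @ y, beta)                    # softmax-like hidden activities
    W += eps * g[:, None] * (y[None] - W)       # Hebbian Δ-rule update
```

Because each update is a convex combination of the old weights and the binary input, the weights stay in [0, 1]; with sharper β the activities approach a winner-take-all, which corresponds to the very-sparse-input limit in which the classical softmax is recovered.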