We study a generative model in which hidden causes combine competitively to produce observations. Multiple active causes determine the value of an observed variable through a max function, where algorithms such as sparse coding, independent component analysis, or non-negative matrix factorization would use a sum. This max rule can model the non-linear interaction of basic components more realistically in many settings, including acoustic and image data. While exact maximum-likelihood learning of the model's parameters proves intractable, we show that efficient approximations to expectation-maximization (EM) can be found when the hidden causes are sparsely active. One of these approximations can be formulated as a neural network with a generalized softmax activation function and Hebbian learning. Learning in recent softmax-like neural networks may therefore be interpreted as approximate maximization of a data likelihood. We use the bars benchmark test to verify our analytical results numerically and to demonstrate the competitiveness of the resulting algorithms. Finally, we show results of fitting the model's parameters to acoustic and visual data sets in which max-like component combinations arise naturally.
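To make the max combination rule concrete, the following is a minimal sketch of a generator for the bars benchmark mentioned above, assuming binary hidden causes, noiseless observations, and a square pixel grid; the variable names (`W`, `pi`, `generate`) are illustrative choices, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 5          # grid side length; each observation is a D*D pixel vector
H = 2 * D      # one hidden cause per horizontal and per vertical bar
pi = 2.0 / H   # sparse activation: about two bars active per image

# Generative weights: each row of W draws one bar on the D x D grid.
W = np.zeros((H, D * D))
for i in range(D):
    horiz = np.zeros((D, D)); horiz[i, :] = 1.0   # horizontal bar i
    vert = np.zeros((D, D));  vert[:, i] = 1.0    # vertical bar i
    W[i] = horiz.ravel()
    W[D + i] = vert.ravel()

def generate(n):
    """Draw n images whose active causes combine via max, not sum."""
    s = rng.random((n, H)) < pi                    # sparse binary causes
    # Max rule: y_d = max_h (s_h * W_hd), in place of the linear sum_h s_h * W_hd
    y = np.max(s[:, :, None] * W[None, :, :], axis=1)
    return s, y

s, y = generate(1000)
```

Under the max rule, pixels where two bars overlap stay at 1 instead of adding to 2, which is the occlusion-like non-linearity that distinguishes this model from its linear counterparts.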