In this paper, the ensemble of independent factor analyzers (EIFA) is proposed. This new statistical model assumes that each data point is generated by the sum of the outputs of independently activated factor analyzers. A maximum likelihood (ML) estimation algorithm for the parameters is derived using a Monte Carlo EM algorithm with a Gibbs sampler. The EIFA model is applied to natural image data. As learning progresses, the independent factor analyzers develop into feature detectors that resemble complex cells in the mammalian visual system. Although this result is similar to a previous one obtained by independent subspace analysis, we observe the emergence of complex cells from natural images within a more general family of models, including overcomplete models that allow additive noise in the observables.
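The generative process described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the dimensions, the activation probability, and the parameter names are illustrative assumptions. Each data point is formed by summing the outputs of those factor analyzers whose independent binary activation fires, then adding observation noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_eifa(n_points, n_analyzers=3, data_dim=8, latent_dim=2,
                p_active=0.5, noise_std=0.1):
    """Draw samples from a toy EIFA-style generative model:
    each point is the sum of the outputs of independently
    activated factor analyzers plus additive observation noise.
    All parameter settings here are illustrative assumptions."""
    # Hypothetical per-analyzer parameters: loading matrix and bias.
    loadings = [rng.normal(size=(data_dim, latent_dim)) for _ in range(n_analyzers)]
    biases = [rng.normal(size=data_dim) for _ in range(n_analyzers)]
    X = np.zeros((n_points, data_dim))
    for n in range(n_points):
        for W, mu in zip(loadings, biases):
            if rng.random() < p_active:          # independent binary activation
                z = rng.normal(size=latent_dim)  # latent factors ~ N(0, I)
                X[n] += W @ z + mu               # this analyzer's contribution
        X[n] += noise_std * rng.normal(size=data_dim)  # additive noise
    return X

X = sample_eifa(100)
```

Inference in the actual model is harder than sampling from it: the binary activations and latent factors are unobserved, which is why the paper resorts to a Monte Carlo EM algorithm with a Gibbs sampler rather than a closed-form E-step.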