We propose an extension of the mixture of factor (or independent component) analyzers model to include strongly super-Gaussian mixture source densities. This allows a more economical representation of densities with (multiple) peaked modes or heavy tails than using several Gaussians to capture these features. We derive an EM algorithm for the maximum likelihood estimate of the model, and show that it converges globally to a local optimum of the actual non-Gaussian mixture model without requiring any approximations. This considerably extends the class of source densities that admit exact estimation, and shows that, in a sense, super-Gaussian densities are as natural as Gaussian densities. We also derive an adaptive Generalized Gaussian algorithm that learns the shape parameters of the Generalized Gaussian mixture components. Experiments verify the validity of the algorithm.
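The abstract does not spell out the shape-parameter update, but a standard moment-matching estimator gives the flavor of how a Generalized Gaussian shape parameter can be adapted from data. The sketch below is a minimal illustration, not the paper's algorithm: the function name gg_shape_moment_match, the bracketing interval, and the use of unweighted sample moments are our own assumptions. It inverts the known ratio of the squared absolute moment to the second moment of a zero-mean Generalized Gaussian.

    import numpy as np
    from scipy.special import gamma
    from scipy.optimize import brentq

    def gg_shape_moment_match(x):
        # For a zero-mean Generalized Gaussian with shape b, the ratio
        # r = (E|x|)^2 / E[x^2] equals
        # Gamma(2/b)^2 / (Gamma(1/b) * Gamma(3/b)):
        # r = 2/pi for b = 2 (Gaussian), r = 1/2 for b = 1 (Laplacian).
        r = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
        f = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b)) - r
        # The bracket [0.1, 10] covers heavy-tailed through mildly
        # sub-Gaussian data; widen it if brentq reports no sign change.
        return brentq(f, 0.1, 10.0)

    # Example: Laplacian samples (shape = 1) should yield an estimate
    # near 1, i.e. a super-Gaussian (b < 2) mixture component.
    rng = np.random.default_rng(0)
    x = rng.laplace(size=100_000)
    print(gg_shape_moment_match(x))  # approximately 1.0

In an EM setting for the mixture model one would presumably apply such an estimator per component using responsibility-weighted moments; the paper's adaptive update may well take a different form.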