Exponential principal component analysis (e-PCA) has been proposed to reduce the dimension of the parameters of probability distributions, using the Kullback-Leibler (KL) divergence as a distance between two distributions. It also provides a framework for handling data types such as binary and integer data, for which the Gaussian assumption on the data distribution is inappropriate. In this paper, we introduce a latent variable model for e-PCA. Assuming a discrete distribution on the latent variable leads to mixture models with constraints on their parameters, which provides a framework for clustering on a lower-dimensional subspace of exponential family distributions. We derive a learning algorithm for these mixture models based on the variational Bayes (VB) method. Although an intractable integration is required to implement the algorithm for a given subspace, an approximation technique using Laplace's method allows us to carry out clustering on an arbitrary subspace. Combined with estimation of the subspace itself, the resulting algorithm performs simultaneous dimensionality reduction and clustering. Numerical experiments on synthetic and real data demonstrate its effectiveness for extracting the structure of data as a visualization technique, and its high generalization ability as a density estimation model.
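To make the geometry concrete, the following is a minimal Python sketch of the e-PCA setting for binary (Bernoulli) data, in which the natural parameters are constrained to a low-dimensional affine subspace and the KL divergence takes the form of a Bregman divergence of the log-partition function. The names (V, b, z) and the choice of Bernoulli observations are illustrative assumptions, not the paper's notation or implementation:

    import numpy as np

    # e-PCA constrains the natural parameters of each data point to an
    # affine subspace: theta_n = b + V @ z_n, with V of shape (D, d), d < D.
    # (V, b, z are illustrative names, not the paper's notation.)

    def log_partition(theta):
        # Log-partition function A(theta) of a product of Bernoullis,
        # computed elementwise in the natural parameterization.
        return np.logaddexp(0.0, theta)

    def mean_param(theta):
        # Expectation parameter grad A(theta) = sigmoid(theta).
        return 1.0 / (1.0 + np.exp(-theta))

    def kl_divergence(theta_p, theta_q):
        # KL(p || q) between product-Bernoulli distributions, written as
        # the Bregman divergence of A:
        #   A(theta_q) - A(theta_p) - <grad A(theta_p), theta_q - theta_p>.
        return np.sum(log_partition(theta_q) - log_partition(theta_p)
                      - (theta_q - theta_p) * mean_param(theta_p))

    # Example: place two points on the subspace via latent coordinates
    # and measure the KL divergence between them.
    rng = np.random.default_rng(0)
    D, d = 10, 2
    V = rng.standard_normal((D, d))
    b = rng.standard_normal(D)
    z1, z2 = rng.standard_normal(d), rng.standard_normal(d)
    print(kl_divergence(b + V @ z1, b + V @ z2))  # nonnegative scalar

In the mixture model the abstract describes, the component parameters are similarly constrained to lie on such a subspace; the VB updates then involve an integral over the subspace with no closed form, which is where the Laplace approximation comes in. The sketch above illustrates only the subspace-plus-divergence geometry, not the VB algorithm itself.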