A recently introduced approach to cluster analysis is based on parsimonious geometric modelling of the within-group covariance matrices in a mixture of multivariate normal distributions, using hierarchical agglomeration and iterative relocation. It performs well and is widely used via the MCLUST software, available in S-PLUS and from StatLib. It has several limitations, however: there is no assessment of the uncertainty of the classification; the partition can be suboptimal; parameter estimates are biased; the shape matrix must be specified by the user; prior group probabilities are assumed to be equal; the method for choosing the number of groups rests on a crude approximation; and no formal way of choosing among the possible models is included. Here we propose a new approach that overcomes all of these difficulties. It consists of exact Bayesian inference via Gibbs sampling, with Bayes factors (for choosing the model and the number of groups) computed from the sampler output using the Laplace–Metropolis estimator. The approach works well in several real and simulated examples.
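The Gibbs sampler at the heart of the proposal can be illustrated with a deliberately simplified sketch, not the paper's multivariate, parsimoniously parameterised version: a univariate mixture with known unit component variances, a N(0, 10²) prior on each mean, and a symmetric Dirichlet(1) prior on the mixing proportions. The sampler alternates draws of the allocations, the mixing proportions, and the component means from their full conditionals; all function and variable names here are illustrative.

```python
import numpy as np

def gibbs_gmm(x, K=2, iters=300, seed=0):
    """Gibbs sampler for a K-component univariate Gaussian mixture with
    unit component variances, a N(0, 10^2) prior on each mean, and a
    symmetric Dirichlet(1) prior on the mixing proportions."""
    rng = np.random.default_rng(seed)
    mu = np.quantile(x, (np.arange(K) + 0.5) / K)  # spread-out initial means
    w = np.full(K, 1.0 / K)                        # initial mixing proportions
    draws = np.empty((iters, K))
    for t in range(iters):
        # 1. Sample each allocation z_i from its full conditional
        #    (categorical with posterior responsibilities).
        logp = np.log(w) - 0.5 * (x[:, None] - mu[None, :]) ** 2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=row) for row in p])
        # 2. Sample mixing proportions from Dirichlet(1 + group counts).
        counts = np.bincount(z, minlength=K)
        w = rng.dirichlet(1.0 + counts)
        # 3. Sample each mean from its conjugate normal full conditional.
        for k in range(K):
            prec = counts[k] + 0.01                # data precision + prior precision
            mu[k] = rng.normal(x[z == k].sum() / prec, 1.0 / np.sqrt(prec))
        draws[t] = np.sort(mu)  # sort components to sidestep label switching
    return draws

# Two well-separated clusters; posterior means should sit near -3 and 3.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
draws = gibbs_gmm(data, K=2)
post_mean = draws[100:].mean(axis=0)  # discard burn-in
```

In the full method, the retained draws would also feed the Laplace–Metropolis estimator of the marginal likelihood, from which Bayes factors compare covariance models and numbers of groups.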