Robust mixture modeling approaches using skewed distributions have recently been explored to accommodate asymmetric data. Parsimonious skew-t and skew-normal analogues of the Gaussian parsimonious clustering model (GPCM) family are introduced; these employ an eigenvalue decomposition of a component scale matrix. The proposed methods are compared with existing models in both unsupervised and semi-supervised classification frameworks. Parameter estimation is carried out using the expectation-maximization (EM) algorithm, and models are selected using the Bayesian information criterion (BIC). The efficacy of these extensions is illustrated on simulated and real data sets.
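The abstract's workflow of fitting a family of constrained mixture models by EM and choosing among them with the BIC can be sketched with scikit-learn's `GaussianMixture`. This is only an illustrative analogue: scikit-learn's `covariance_type` options (`full`, `tied`, `diag`, `spherical`) are a much coarser set of scale-matrix constraints than the eigen-decomposed GPCM family, and the skew-t and skew-normal components of the paper are not available here; the synthetic two-cluster data set is an assumed stand-in for real data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Illustrative data: two well-separated spherical clusters.
X = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(150, 2)),
    rng.normal(loc=[4.0, 4.0], scale=0.5, size=(150, 2)),
])

# Fit mixtures by EM over a grid of component counts and covariance
# constraints, keeping the model with the lowest BIC (lower is better).
best_bic, best_model = np.inf, None
for k in range(1, 5):
    for cov in ("full", "tied", "diag", "spherical"):
        gm = GaussianMixture(n_components=k, covariance_type=cov,
                             random_state=0).fit(X)
        bic = gm.bic(X)
        if bic < best_bic:
            best_bic, best_model = bic, gm

print(best_model.n_components, best_model.covariance_type)
```

On data like this, BIC recovers the true number of components while penalizing unnecessarily rich covariance structures, which is the same model-selection logic the paper applies across its parsimonious skew-t and skew-normal family.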