An approach is proposed for initializing the expectation-maximization (EM) algorithm in multivariate Gaussian mixture models with an unknown number of components. Because the EM algorithm is often sensitive to the choice of the initial parameter vector, careful initialization is an important preliminary step that largely determines whether the algorithm converges to the best local maximum of the likelihood function. We propose a strategy that initializes the mean vectors at points with high concentrations of neighbors and uses a truncated normal distribution for the preliminary estimation of the dispersion matrices. The suggested approach is illustrated on examples and compared with several other initialization methods.
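The idea of seeding means at points with many close neighbors can be sketched as follows. This is an illustrative simplification, not the authors' exact procedure: the `radius` parameter and the plain local-neighborhood covariance stand in for the paper's truncated-normal preliminary estimator of the dispersion matrices.

```python
import numpy as np

def density_based_init(X, k, radius=1.0):
    """Pick k initial mean vectors at the points with the most neighbors
    within `radius`, excluding each chosen point's neighborhood before the
    next pick; start each covariance from the local neighborhood.
    (Hypothetical sketch; not the paper's truncated-normal estimator.)"""
    n, d = X.shape
    # Pairwise Euclidean distances between all observations.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    counts = (D < radius).sum(axis=1)  # neighbor concentration per point
    means, covs = [], []
    available = np.ones(n, dtype=bool)
    for _ in range(k):
        # Densest point among those not yet claimed by a previous center.
        idx = np.argmax(np.where(available, counts, -1))
        means.append(X[idx])
        nbrs = D[idx] < radius
        local = X[nbrs]
        # Preliminary dispersion estimate from the local neighborhood;
        # fall back to the identity when the neighborhood is too small.
        covs.append(np.cov(local.T) if local.shape[0] > d else np.eye(d))
        available &= ~nbrs  # exclude this neighborhood from later picks
    return np.array(means), np.array(covs)

# Two well-separated Gaussian blobs: the densest points land near
# the two true centers, giving EM a reasonable starting partition.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(5.0, 0.3, (50, 2))])
means, covs = density_based_init(X, k=2, radius=1.0)
```

In a full EM pipeline, `means` and `covs` (plus, say, equal mixing proportions) would form the initial parameter vector handed to the first E-step.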