Normal mixture models are widely used for statistical modeling of data, including cluster analysis. However, maximum likelihood estimation (MLE) for normal mixtures using the EM algorithm may fail as a result of singularities or degeneracies. To avoid this, we propose replacing the MLE by a maximum a posteriori (MAP) estimator, also found by the EM algorithm. For choosing the number of components and the model parameterization, we propose a modified version of BIC in which the likelihood is evaluated at the MAP instead of the MLE. We use a highly dispersed proper conjugate prior containing a small fraction of one observation's worth of information. The resulting method avoids degeneracies and singularities, but when these are not present it gives results similar to those of the standard method based on MLE, EM, and BIC.
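To make the idea concrete, the following is a minimal Python sketch of MAP-regularized EM for a univariate normal mixture, with a BIC-type criterion evaluated at the MAP rather than the MLE. The conjugate normal-inverse-chi-square prior, the hyperparameter choices (prior mean at the sample mean, roughly 0.01 observations' worth of information in kappa_p), and the function name map_em_gmm_1d are illustrative assumptions made for this sketch, not the authors' exact specification.

```python
import numpy as np

def map_em_gmm_1d(x, K, kappa_p=0.01, nu_p=3.0, n_iter=200, seed=0):
    """EM for a univariate Gaussian mixture in which the M-step uses MAP
    (posterior-mode) updates under a conjugate normal-inverse-chi-square
    prior on (mean, variance) and a flat Dirichlet prior on the mixing
    proportions.  Hyperparameters are illustrative, not the paper's defaults."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Dispersed, data-dependent prior: centred at the sample mean, with a
    # variance scale derived from the overall sample variance.
    mu_p = x.mean()
    zeta2_p = x.var() / K**2

    # Random initialisation of the responsibilities.
    resp = rng.dirichlet(np.ones(K), size=n)

    for _ in range(n_iter):
        # M-step (MAP): posterior-mode updates given the responsibilities.
        nk = resp.sum(axis=0) + 1e-10
        xbar = resp.T @ x / nk
        ss = (resp * (x[:, None] - xbar) ** 2).sum(axis=0)
        kappa_n = kappa_p + nk
        mu = (kappa_p * mu_p + nk * xbar) / kappa_n
        var = (nu_p * zeta2_p + ss
               + kappa_p * nk / kappa_n * (xbar - mu_p) ** 2) / (nu_p + nk + 3.0)
        tau = nk / n  # flat Dirichlet prior, so the MAP matches the MLE form

        # E-step: responsibilities under the current MAP parameters.
        logpdf = (-0.5 * np.log(2 * np.pi * var)
                  - 0.5 * (x[:, None] - mu) ** 2 / var + np.log(tau))
        m = logpdf.max(axis=1, keepdims=True)
        loglik_i = m[:, 0] + np.log(np.exp(logpdf - m).sum(axis=1))
        resp = np.exp(logpdf - loglik_i[:, None])

    loglik = loglik_i.sum()
    n_params = (K - 1) + 2 * K                   # proportions + means + variances
    bic_map = 2 * loglik - n_params * np.log(n)  # BIC evaluated at the MAP
    return mu, var, tau, bic_map

# Usage: the prior keeps component variances away from zero, so the criterion
# can be compared across candidate numbers of components without degeneracies.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
scores = {K: map_em_gmm_1d(x, K)[3] for K in range(1, 5)}
best_K = max(scores, key=scores.get)
```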