Finite Gaussian mixture models are widely used in statistics thanks to their great flexibility. However, parameter estimation for Gaussian mixture models in high dimensions can be challenging because of the large number of parameters that need to be estimated. In this letter, we propose a penalized likelihood estimator to address this difficulty. The ℓ1-type penalty we impose on the inverse covariance matrices encourages sparsity in their entries and therefore helps to reduce the effective dimensionality of the problem. We show that the proposed estimate can be efficiently computed using an expectation-maximization algorithm. To illustrate the practical merits of the proposed method, we consider its applications in model-based clustering and mixture discriminant analysis. Numerical experiments with both simulated and real data show that the new method is a valuable tool for high-dimensional data analysis.
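A minimal sketch of a penalized EM of this flavor, assuming an ℓ1 (graphical-lasso) penalty on each component's inverse covariance in the M-step. This is an illustration using scikit-learn's `graphical_lasso` routine, not the paper's actual estimator; the penalty weight `alpha`, the two-component simulated data, and the fixed iteration count are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)

# Simulated data: two well-separated 5-dimensional Gaussian clusters.
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),
               rng.normal(3.0, 1.0, (100, 5))])
n, d, K = X.shape[0], X.shape[1], 2

# Initialization: uniform weights, two data points as means, identity covariances.
pi = np.full(K, 1.0 / K)
mu = X[[0, -1]].copy()
Sigma = np.stack([np.eye(d)] * K)

for _ in range(20):
    # E-step: posterior responsibilities r[i, k] of component k for point i.
    dens = np.stack([pi[k] * multivariate_normal.pdf(X, mu[k], Sigma[k])
                     for k in range(K)], axis=1)
    r = dens / dens.sum(axis=1, keepdims=True)

    # M-step: weighted updates of mixture weights and means.
    nk = r.sum(axis=0)
    pi = nk / n
    mu = (r.T @ X) / nk[:, None]
    for k in range(K):
        Xc = X - mu[k]
        S = (r[:, k, None] * Xc).T @ Xc / nk[k]  # weighted empirical covariance
        # Penalized covariance update: the l1 penalty on the inverse
        # covariance yields a sparse precision matrix (graphical lasso).
        Sigma[k], _ = graphical_lasso(S, alpha=0.1)
```

After convergence, `r.argmax(axis=1)` gives a model-based clustering of the points, and the second return value of `graphical_lasso` is the sparse precision matrix itself, whose zero pattern reflects the reduced effective dimensionality.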