Clustering is a fundamental problem in statistics and machine learning. Its solution is commonly computed with the Expectation-Maximization (EM) method, which finds a locally optimal solution of an objective function called the log-likelihood. Since the surface of the log-likelihood function is non-convex, a stochastic search with Markov Chain Monte Carlo (MCMC) methods can help escape locally optimal solutions. In this article, we tackle two fundamental but conflicting goals: finding higher-quality solutions and achieving faster convergence. With that motivation in mind, we introduce an efficient algorithm that combines elements of the EM and MCMC methods to find clustering solutions that are qualitatively better than those found by the standard EM method. Moreover, our hybrid algorithm allows tuning model parameters and understanding the uncertainty in their estimation. The main issue with MCMC methods is that they generally require a very large number of iterations to explore the posterior distribution of each model parameter. Convergence is accelerated by several algorithmic improvements, including sufficient statistics, simplified model parameter priors, fixed covariance matrices, and iterative sampling from small blocks of the data set. A brief experimental evaluation shows promising results.
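To illustrate the kind of EM/MCMC hybrid the abstract describes, the sketch below runs EM on a one-dimensional Gaussian mixture with fixed (unit) variances, but during the first few iterations it *samples* hard cluster assignments from the responsibilities instead of using them directly, a simple Monte Carlo perturbation that can help escape poor local optima. This is a minimal illustration under stated assumptions, not the authors' algorithm: the function names, the quantile-based initialization, and the specific sampling scheme are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def e_step(x, w, mu, sigma2):
    # Responsibilities: posterior probability of each component per point,
    # for a mixture of Gaussians with a fixed, shared variance sigma2.
    dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    r = w * dens
    r /= r.sum(axis=1, keepdims=True)
    return r

def m_step(x, r):
    # Update weights and means from sufficient statistics (counts and sums),
    # so each M-step touches only k-dimensional aggregates, not raw data twice.
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    return w, mu

def stochastic_em(x, k=2, sigma2=1.0, n_iter=50, sample_iters=10):
    # Illustrative initialization: spread means over quantiles of the data.
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))
    w = np.full(k, 1.0 / k)
    for t in range(n_iter):
        r = e_step(x, w, mu, sigma2)
        if t < sample_iters:
            # Stochastic E-step: draw a hard assignment z_i ~ r_i for each
            # point; the randomness perturbs the search before plain EM
            # takes over and converges deterministically.
            z = np.array([rng.choice(k, p=p) for p in r])
            r = np.eye(k)[z]
        w, mu = m_step(x, r)
    return w, np.sort(mu)
```

For example, on data drawn from two well-separated unit-variance Gaussians centered at -5 and +5, the recovered means land close to the true centers; the early sampled iterations add noise, but the final deterministic EM sweeps converge as usual.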