Spatial clustering must take spatial information into account, which makes the expectation-maximization (EM) algorithm, which maximizes likelihood alone, inappropriate. Although the neighborhood EM (NEM) algorithm incorporates a spatial penalty term, it requires many more iterations in the E-step. To incorporate spatial information while avoiding much additional computation, we propose a hybrid EM (HEM) approach that combines EM and NEM. Early training is performed via a selective hard EM until the penalized likelihood criterion begins to decrease; training then switches to NEM, which runs only one E-step iteration per pass and serves as a finer-tuning stage. Spatial information is thus incorporated throughout HEM, while the computational complexity remains comparable to that of EM. Empirical results show that only a few more passes are needed for HEM to converge after switching to NEM, and that the final clustering quality is close to, or slightly better than, that of standard NEM.
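The switching logic described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes unit-variance isotropic Gaussian components with equal priors, a 4-neighbour grid as the spatial structure, and a simple mean-field form of the NEM penalty (the sum over neighbour pairs of the dot product of their posteriors). All function names (`hem`, `neighbors`, `penalized_criterion`) and the synthetic-image usage are hypothetical.

```python
import numpy as np

def neighbors(h, w):
    """4-neighbour index pairs for an h x w pixel grid, row-major order."""
    pairs = []
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w: pairs.append((i, i + 1))
            if r + 1 < h: pairs.append((i, i + w))
    return np.array(pairs)

def responsibilities(x, mu):
    """Soft posteriors under unit-variance Gaussians with equal priors."""
    d2 = ((x[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    p = np.exp(-0.5 * (d2 - d2.min(1, keepdims=True)))  # shift for stability
    return p / p.sum(1, keepdims=True)

def penalized_criterion(x, mu, post, pairs, beta):
    """Penalized likelihood: log-likelihood plus spatial agreement term."""
    d2 = ((x[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    loglik = np.log(np.exp(-0.5 * d2).sum(1) + 1e-300).sum()
    penalty = (post[pairs[:, 0]] * post[pairs[:, 1]]).sum()
    return loglik + beta * penalty

def hem(x, pairs, k=2, beta=1.0, iters=50, seed=0):
    """Hybrid EM sketch: hard EM until the criterion drops, then NEM."""
    rng = np.random.default_rng(seed)
    mu = x[rng.choice(len(x), k, replace=False)]
    prev, hard = -np.inf, True           # phase 1: hard (classification) EM
    for _ in range(iters):
        post = responsibilities(x, mu)
        if hard:
            post = np.eye(k)[post.argmax(1)]      # hard E-step assignment
        else:
            # NEM phase: a single fixed-point E-step pass that folds the
            # neighbours' posteriors into each site's posterior
            field = np.zeros_like(post)
            np.add.at(field, pairs[:, 0], post[pairs[:, 1]])
            np.add.at(field, pairs[:, 1], post[pairs[:, 0]])
            post = post * np.exp(beta * field)
            post /= post.sum(1, keepdims=True)
        mu = (post.T @ x) / (post.sum(0)[:, None] + 1e-12)  # M-step: means
        crit = penalized_criterion(x, mu, post, pairs, beta)
        if hard and crit < prev:
            hard = False                 # criterion decreased: switch to NEM
        prev = crit
    return mu, responsibilities(x, mu).argmax(1)

# Usage: segment a small two-region noisy image into k=2 clusters.
h, w = 8, 8
rng = np.random.default_rng(1)
img = np.where(np.arange(w)[None, :] < w // 2, 0.0, 5.0) \
      + rng.normal(0, 0.5, (h, w))
mu, labels = hem(img.reshape(-1, 1), neighbors(h, w), k=2, beta=0.5)
```

The key design point is that both phases share the same M-step and the same penalized criterion; only the E-step changes, so the per-pass cost stays close to plain EM rather than iterating the NEM fixed-point equations to convergence.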