Keywords: Clustering Algorithms; Feature Weighting in k-Means Clustering; Machine Learning
In this paper we introduce the Minkowski weighted partition around medoids algorithm (MW-PAM). It extends the popular partition around medoids (PAM) algorithm by automatically assigning K weights to each feature in a dataset, where K is the number of clusters. Our approach computes the weights from the within-cluster variance of each feature, measured under the Minkowski metric. Extensive experiments show that MW-PAM, particularly when initialized with the Build algorithm (also using the Minkowski metric), outperforms other medoid-based algorithms in both clustering accuracy and the identification of irrelevant features.
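The weighting scheme described above can be sketched in a few lines. The code below is an illustrative reconstruction, not the authors' implementation: it assumes the common dispersion-based weight update used in Minkowski weighted k-means-style methods, where each cluster gets one weight per feature, inversely related to that feature's within-cluster dispersion around the medoid, with Minkowski exponent `p`. The function names (`minkowski_weights`, `weighted_minkowski`) are hypothetical.

```python
import numpy as np

def minkowski_weights(X, labels, medoids, p=2.0):
    """Cluster-specific feature weights from within-cluster dispersion.

    Sketch of the weighting idea in the abstract (assumed form, p != 1):
    w_kv is proportional to D_kv^(-1/(p-1)), where D_kv is the Minkowski
    dispersion of feature v around the medoid of cluster k, so low-dispersion
    (informative) features get high weights and weights sum to 1 per cluster.
    """
    k, n_features = medoids.shape
    weights = np.zeros((k, n_features))
    for c in range(k):
        members = X[labels == c]
        # Per-feature dispersion around the medoid; a tiny constant avoids
        # division by zero for features with zero within-cluster spread.
        disp = (np.abs(members - medoids[c]) ** p).sum(axis=0) + 1e-9
        inv = disp ** (-1.0 / (p - 1.0))
        weights[c] = inv / inv.sum()  # normalize so each row sums to 1
    return weights

def weighted_minkowski(x, medoid, w, p=2.0):
    """Weighted Minkowski distance: sum_v w_v^p * |x_v - m_v|^p."""
    return np.sum((w ** p) * np.abs(x - medoid) ** p)
```

With `p = 2` this reduces to a feature-weighted squared Euclidean distance; other values of `p` change how strongly outlying coordinates dominate, which is the tuning knob the Minkowski metric adds over plain PAM.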