Visualization and interactive feature selection for unsupervised data. Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '00).
Knowledge Acquisition via Incremental Conceptual Clustering. Machine Learning.
Feature Selection and Incremental Learning of Probabilistic Concept Hierarchies. Proceedings of the Seventeenth International Conference on Machine Learning (ICML '00).
Simultaneous Feature Selection and Clustering Using Mixture Models. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), Volume 4.
Automated Variable Weighting in k-Means Type Clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence.
IEEE Transactions on Knowledge and Data Engineering.
Bayesian Feature and Model Selection for Gaussian Mixture Models. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Recently, the Rival Penalized Expectation-Maximization (RPEM) algorithm (Cheung 2004 & 2005) has demonstrated an outstanding capability to perform model selection automatically in the context of density mixture models. Nevertheless, RPEM cannot exclude irrelevant variables (also called features) from the clustering process, which may degrade its performance. In this paper, we adopt the concept of feature saliency (Law et al. 2004) as a feature weight that measures the relevance of each feature to the cluster structure in the subspace, and integrate it into the RPEM algorithm. The proposed algorithm identifies the irrelevant features and estimates the number of clusters automatically and simultaneously within a single learning paradigm. Experiments on both synthetic and benchmark real-world data sets show the efficacy of the proposed algorithm.
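To make the idea of combining feature saliency with mixture-model clustering concrete, the following is a minimal NumPy sketch of an EM-style procedure for a diagonal-covariance Gaussian mixture with per-feature saliency weights in the spirit of Law et al. (2004). It is not the authors' algorithm: it does not implement the rival-penalized updates of Cheung (2004, 2005), and the function name `fit_feature_saliency_gmm`, the background-density parametrisation, and the simple weight-pruning step used as a stand-in for automatic model selection are all assumptions made for this illustration.

```python
# Illustrative sketch only: feature-saliency EM for a diagonal Gaussian mixture,
# NOT the RPEM algorithm described in the paper.
import numpy as np


def _diag_gauss(x, mean, var):
    """Per-feature Gaussian density N(x_d | mean_d, var_d); returns shape (N, D)."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)


def fit_feature_saliency_gmm(X, k=5, n_iter=100, prune_tol=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Component parameters: mixing weights, means, diagonal variances.
    alpha = np.full(k, 1.0 / k)
    mu = X[rng.choice(n, size=k, replace=False)]            # (k, d)
    var = np.tile(X.var(axis=0), (k, 1)) + 1e-6              # (k, d)
    # Cluster-independent "background" density modelling irrelevant features.
    bg_mu, bg_var = X.mean(axis=0), X.var(axis=0) + 1e-6
    rho = np.full(d, 0.5)                                     # feature saliencies

    for _ in range(n_iter):
        # E-step: mix the relevant (cluster-specific) and irrelevant (background)
        # per-feature densities for every point and component.
        rel = np.stack([rho * _diag_gauss(X, mu[j], var[j]) for j in range(k)])   # (k, n, d)
        irr = (1.0 - rho) * _diag_gauss(X, bg_mu, bg_var)                         # (n, d)
        per_feat = rel + irr                                                      # (k, n, d)
        comp_lik = alpha[:, None] * np.prod(per_feat, axis=2)                     # (k, n)
        resp = comp_lik / (comp_lik.sum(axis=0, keepdims=True) + 1e-300)          # (k, n)
        # Posterior that feature d is relevant, given point n and component j.
        u = rel / (per_feat + 1e-300)                                             # (k, n, d)

        # M-step: responsibility- and relevance-weighted parameter updates.
        w = resp[:, :, None] * u                                                  # (k, n, d)
        alpha = resp.sum(axis=1) / n
        mu = (w * X).sum(axis=1) / (w.sum(axis=1) + 1e-300)
        var = (w * (X - mu[:, None, :]) ** 2).sum(axis=1) / (w.sum(axis=1) + 1e-300) + 1e-6
        rho = w.sum(axis=(0, 1)) / n                                              # saliency per feature

        # Crude stand-in for automatic model selection: drop components whose
        # mixing weight collapses (RPEM instead penalises rival components).
        keep = alpha > prune_tol
        if keep.sum() < k:
            k = int(keep.sum())
            alpha, mu, var = alpha[keep] / alpha[keep].sum(), mu[keep], var[keep]

    return alpha, mu, var, rho
```

In such a sketch, features whose estimated saliency rho_d drifts toward zero are effectively excluded from the cluster structure, while components whose mixing weight collapses are removed, which is the behaviour the paper obtains jointly within the RPEM framework.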