In clustering, most feature selection approaches consider all features of the data in order to identify a single common feature subset that contributes to the discovery of interesting clusters. However, many datasets comprise multiple feature subsets, each of which relates to the meaningful clusters in a different way. In this paper, we attempt to reveal a feature partition consisting of multiple non-overlapping feature blocks, each of which fits a finite mixture model. To find the desired feature partition, we use a local search algorithm based on a simulated annealing technique. During the search for the optimal feature partition, previous estimation results are reused to reduce computational cost.
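The search procedure described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: `anneal_feature_partition` and `score_block` are names introduced here, and the block score is a caller-supplied placeholder standing in for the fit of a finite mixture model (e.g. a log-likelihood or BIC value). The cache of previously evaluated blocks mirrors the paper's idea of reusing earlier estimation results.

```python
import math
import random

def anneal_feature_partition(features, score_block, n_blocks=2,
                             t0=1.0, cooling=0.95, steps=200, seed=0):
    """Simulated-annealing sketch over partitions of features into blocks.

    `score_block` maps a frozenset of features to a fitness value
    (higher is better); in the paper's setting this would be the fit of
    a finite mixture model on that feature block.
    """
    rng = random.Random(seed)
    # Start from a random assignment of each feature to a block.
    assign = {f: rng.randrange(n_blocks) for f in features}
    cache = {}  # reuse scores of previously evaluated blocks

    def block_score(block):
        key = frozenset(block)
        if key not in cache:
            cache[key] = score_block(key)
        return cache[key]

    def total(a):
        blocks = {}
        for f, b in a.items():
            blocks.setdefault(b, set()).add(f)
        return sum(block_score(s) for s in blocks.values())

    best, best_val = dict(assign), total(assign)
    cur_val, t = best_val, t0
    for _ in range(steps):
        # Neighbor move: reassign one randomly chosen feature.
        f = rng.choice(features)
        cand = dict(assign)
        cand[f] = rng.randrange(n_blocks)
        val = total(cand)
        # Metropolis rule: accept improvements, and worse moves
        # with probability decaying as the temperature cools.
        if val >= cur_val or rng.random() < math.exp((val - cur_val) / t):
            assign, cur_val = cand, val
            if cur_val > best_val:
                best, best_val = dict(assign), cur_val
        t *= cooling
    return best, best_val
```

For instance, with a toy score `lambda s: -len(s)**2` the search favors balanced blocks, since splitting features evenly minimizes the summed squared block sizes.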