Feature Selection for Clustering - A Filter Solution
ICDM '02 Proceedings of the 2002 IEEE International Conference on Data Mining
Multiclass Spectral Clustering
ICCV '03 Proceedings of the Ninth IEEE International Conference on Computer Vision - Volume 2
Feature Selection for Unsupervised Learning
The Journal of Machine Learning Research
Feature Selection for Clustering on High Dimensional Data
PRICAI '08 Proceedings of the 10th Pacific Rim International Conference on Artificial Intelligence: Trends in Artificial Intelligence
Local Kernel Regression Score for Selecting Features of High-Dimensional Data
IEEE Transactions on Knowledge and Data Engineering
Kernel Learning for Local Learning Based Clustering
ICANN '09 Proceedings of the 19th International Conference on Artificial Neural Networks: Part I
Feature selection for transfer learning
ECML PKDD '11 Proceedings of the 2011 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part III
The performance of most clustering algorithms depends strongly on the data representation. In this paper, we attempt to obtain better data representations through feature selection, particularly for Local Learning based Clustering (LLC) [1]. We assign a weight to each feature and incorporate it into the built-in regularization of the LLC algorithm to account for the relevance of each feature to the clustering. The weights are then estimated iteratively alongside the clustering. We show that the resulting weighted regularization, with an additional constraint on the weights, is equivalent to a known sparsity-promoting penalty, so the weights of irrelevant features are driven towards zero. Experiments on several benchmark datasets demonstrate the effectiveness of the proposed method.
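To illustrate the general idea of estimating feature weights iteratively with the clustering, the following is a minimal sketch. It is not the paper's LLC-based method: it substitutes a weighted k-means analogue, and the inverse-within-cluster-scatter weight update with the simplex constraint (weights non-negative, summing to one) is an illustrative assumption. It only demonstrates how alternating clustering and weight estimation can push the weights of irrelevant (high-scatter) features towards zero.

```python
import numpy as np

def weighted_kmeans_feature_weights(X, k, n_iter=10, beta=2.0, seed=0):
    """Alternate clustering and per-feature weight updates (illustrative sketch).

    Weights are set inversely proportional to each feature's within-cluster
    scatter, raised to the power `beta`, and normalized to sum to one, so
    features that carry no cluster structure receive weights near zero.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(d, 1.0 / d)  # start with uniform feature weights

    # farthest-point initialization of the k centers (weighted distances)
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2) @ w for c in centers], axis=0)
        centers.append(X[d2.argmax()])
    centers = np.array(centers, dtype=float)

    for _ in range(n_iter):
        # assign each point to its nearest center under the weighted metric
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
        labels = dists.argmin(axis=1)
        # update cluster centers (skip clusters that became empty)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
        # per-feature within-cluster scatter
        scatter = np.array([
            sum(((X[labels == j, f] - centers[j, f]) ** 2).sum() for j in range(k))
            for f in range(d)
        ])
        # smaller scatter -> larger weight; beta > 1 sharpens the contrast
        inv = 1.0 / (scatter + 1e-12)
        w = inv ** beta
        w /= w.sum()
    return labels, w
```

On synthetic data with one feature separating two clusters and one pure-noise feature, the informative feature ends up with nearly all the weight, which is the qualitative behavior the sparsity-promoting penalty in the paper is designed to achieve.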