In this article, a dense-subgraph-finding approach is adopted for the unsupervised feature selection problem. The feature set of a dataset is mapped to a graph representation, with individual features constituting the vertex set and inter-feature mutual information defining the edge weights. Feature selection is performed in two phases: first, the densest subgraph is obtained, yielding a core of features that are maximally non-redundant with respect to one another; second, the remaining features are clustered around these non-redundant features to produce the reduced feature set. An approximation algorithm is used for the densest-subgraph computation. Empirically, the proposed approach is found to be competitive with several state-of-the-art unsupervised feature selection algorithms.
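The two-phase pipeline in the abstract can be sketched in a few steps: estimate pairwise mutual information between features, run a greedy peeling approximation for the densest subgraph, and assign every remaining feature to its closest representative. The sketch below is a minimal illustration, not the authors' implementation: it assumes a histogram-based normalized-MI estimator, converts MI into a dissimilarity (`1 - NMI`) so that the densest subgraph corresponds to a maximally non-redundant core, and uses Charikar-style greedy peeling as the approximation algorithm. All function names (`nmi`, `densest_subgraph`, `select_features`) and the choice of `bins=8` are illustrative assumptions.

```python
import numpy as np

def nmi(x, y, bins=8):
    """Histogram-based normalized mutual information in [0, 1].
    (Illustrative plug-in estimator; the paper may use a different one.)"""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = (pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum()
    hx = -(px[px > 0] * np.log(px[px > 0])).sum()
    hy = -(py[py > 0] * np.log(py[py > 0])).sum()
    denom = np.sqrt(hx * hy)
    return float(mi / denom) if denom > 0 else 0.0

def densest_subgraph(W):
    """Greedy peeling (Charikar-style 2-approximation): repeatedly delete
    the vertex of minimum weighted degree and keep the densest prefix."""
    n = W.shape[0]
    active = set(range(n))
    deg = W.sum(axis=1).astype(float)
    best_density, best_set = -1.0, set(active)
    while active:
        # weighted density W(S)/|S|; sum of in-set degrees = 2 * W(S)
        density = sum(deg[v] for v in active) / (2 * len(active))
        if density > best_density:
            best_density, best_set = density, set(active)
        v = min(active, key=lambda u: deg[u])
        active.remove(v)
        for u in active:
            deg[u] -= W[v, u]
    return sorted(best_set)

def select_features(X, bins=8):
    """Phase 1: densest subgraph of the dissimilarity graph gives a
    non-redundant core. Phase 2: cluster every other feature around
    its most similar (lowest-dissimilarity) representative."""
    d = X.shape[1]
    D = np.zeros((d, d))  # pairwise dissimilarity = 1 - NMI (assumption)
    for i in range(d):
        for j in range(i + 1, d):
            D[i, j] = D[j, i] = 1.0 - nmi(X[:, i], X[:, j], bins)
    reps = densest_subgraph(D)
    clusters = {r: [r] for r in reps}
    for f in range(d):
        if f not in clusters:
            clusters[min(reps, key=lambda r: D[f, r])].append(f)
    return reps, clusters
```

The exact peeling on the dissimilarity graph naturally drops one member of a highly redundant feature pair, since a near-duplicate feature contributes little dissimilarity mass and therefore has low weighted degree.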