Feature selection methods have been extensively applied in machine learning tasks such as computer vision, pattern recognition, and data mining. These methods aim to identify a subset of the original features with high discriminating power. Among them, feature selection for unsupervised tasks is particularly attractive, since obtaining class labels and/or between-class information is often costly. On the other hand, although the low-dimensional manifold of the "same"-class data can usually be revealed by exploiting the local invariance of the data structure, preserving that structure alone may not be adequate for unsupervised tasks where class information is completely absent. In this paper, a novel feature selection method, called Minimum-maximum local structure information Laplacian Score (MMLS), is proposed to minimize the within-locality information (i.e., preserving the manifold structure of the "same"-class data) and to maximize the between-locality information (i.e., maximizing the information between the manifold structures of the "different"-class data) at the same time. The effectiveness of the proposed algorithm is demonstrated with experiments on classification and clustering.
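To make the min-max idea concrete, the sketch below illustrates a Laplacian-Score-style criterion of the kind the abstract describes: each feature is scored by the ratio of its within-locality scatter (over a kNN graph) to its between-locality scatter (over a non-neighbour graph), and features with the smallest ratio are kept. This is a minimal illustration under assumed choices (heat-kernel kNN affinity, a crude complement graph for the between-locality term, a simple ratio objective), not the authors' exact MMLS formulation; the function names are hypothetical.

```python
import numpy as np

def knn_affinity(X, k=5, sigma=1.0):
    """Heat-kernel affinity restricted to the k nearest neighbours (assumed graph construction)."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Keep only the k largest weights per row, then symmetrise.
    mask = np.zeros_like(W, dtype=bool)
    idx = np.argsort(-W, axis=1)[:, :k]
    mask[np.arange(n)[:, None], idx] = True
    return np.where(mask | mask.T, W, 0.0)

def minmax_locality_score(X, k=5, sigma=1.0):
    """Illustrative min-max locality score per feature (hypothetical stand-in for MMLS):
    small within-locality scatter divided by large between-locality scatter is preferred,
    so lower scores rank higher."""
    n, d = X.shape
    W_within = knn_affinity(X, k=k, sigma=sigma)       # "same"-manifold (within-locality) graph
    W_between = np.where(W_within > 0, 0.0, 1.0)       # crude non-neighbour (between-locality) graph
    np.fill_diagonal(W_between, 0.0)
    L_w = np.diag(W_within.sum(axis=1)) - W_within     # within-locality graph Laplacian
    L_b = np.diag(W_between.sum(axis=1)) - W_between   # between-locality graph Laplacian
    scores = np.empty(d)
    for r in range(d):
        f = X[:, r] - X[:, r].mean()
        scores[r] = (f @ L_w @ f) / (f @ L_b @ f + 1e-12)
    return scores

# Usage: rank features on unlabeled data and keep the 5 best-scoring ones.
X = np.random.rand(100, 20)
selected = np.argsort(minmax_locality_score(X, k=5))[:5]
```

The within-locality term alone reduces to a standard Laplacian-Score-like criterion; dividing by the between-locality term is what penalises features that compress neighbours and non-neighbours alike, which is the trade-off the abstract motivates.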