A note on genetic algorithms for large-scale feature selection
Pattern Recognition Letters
C4.5: programs for machine learning
Floating search methods in feature selection
Pattern Recognition Letters
Divergence Based Feature Selection for Multimodal Class Densities
IEEE Transactions on Pattern Analysis and Machine Intelligence
Classifier-Independent Feature Selection For Two-Stage Feature Selection
SSPR '98/SPR '98 Proceedings of the Joint IAPR International Workshops on Advances in Pattern Recognition
Classifier-Independent Feature Selection Based on Non-parametric Discriminant Analysis
Proceedings of the Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition
Feature selection aims to find the most important feature subset from a given feature set without losing discriminative information. In general, we wish to select a feature subset that is effective for any kind of classifier. Such approaches are called Classifier-Independent Feature Selection, and Novovičová et al.'s method is one of them. Their method estimates the class-conditional densities with Gaussian mixture models and selects a feature subset using the Kullback-Leibler divergence between the estimated densities, but it gives no indication of how many features should be selected. Kudo and Sklansky (1997) suggested selecting a minimal feature subset such that the degree of degradation of performance is guaranteed. In this study, based on their suggestion, we try to find a feature subset that is minimal while maintaining a given Kullback-Leibler divergence.
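To make the idea concrete, the following is only a rough Python sketch of this kind of procedure, not the authors' implementation: class-conditional densities are fitted with Gaussian mixtures, the Kullback-Leibler divergence is estimated by Monte Carlo sampling (the paper may use a different estimator), and a backward-elimination loop drops features while the divergence on the remaining subset stays above a chosen fraction of the full-set divergence. The names `kl_divergence`, `minimal_subset`, and the `keep_ratio` parameter are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def kl_divergence(X0, X1, features, n_components=2, n_samples=5000, seed=0):
    """Monte Carlo estimate of KL(p0 || p1) between class-conditional
    densities modelled as Gaussian mixtures on the given feature subset."""
    f = list(features)
    gmm0 = GaussianMixture(n_components, random_state=seed).fit(X0[:, f])
    gmm1 = GaussianMixture(n_components, random_state=seed).fit(X1[:, f])
    samples, _ = gmm0.sample(n_samples)  # draw from the class-0 density
    return np.mean(gmm0.score_samples(samples) - gmm1.score_samples(samples))

def minimal_subset(X0, X1, keep_ratio=0.95):
    """Backward elimination: remove features as long as the divergence on the
    remaining subset stays above keep_ratio * divergence of the full set."""
    selected = list(range(X0.shape[1]))
    target = keep_ratio * kl_divergence(X0, X1, selected)
    removed_something = True
    while removed_something and len(selected) > 1:
        removed_something = False
        # Try removing each remaining feature; keep the removal that hurts least.
        scores = [(kl_divergence(X0, X1, [f for f in selected if f != j]), j)
                  for j in selected]
        best_div, least_useful = max(scores)
        if best_div >= target:
            selected.remove(least_useful)
            removed_something = True
    return selected
```

In this sketch the stopping rule plays the role of the "given Kullback-Leibler divergence": the subset is shrunk only while the estimated divergence remains above the prescribed level, so the returned subset is minimal with respect to that criterion under the greedy search.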