Selection of relevant features and examples in machine learning
Artificial Intelligence - Special issue on relevance
Wrappers for feature subset selection
Artificial Intelligence - Special issue on relevance
An introduction to variable and feature selection
The Journal of Machine Learning Research
Benefitting from the variables that variable selection discards
The Journal of Machine Learning Research
An extensive empirical study of feature selection metrics for text classification
The Journal of Machine Learning Research
Feature extraction by non-parametric mutual information maximization
The Journal of Machine Learning Research
Pattern Classification (2nd Edition)
A pitfall and solution in multi-class feature selection for text classification
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Pattern Recognition, Third Edition
Feature extraction for one-class classification
ICANN/ICONIP'03 Proceedings of the 2003 joint international conference on Artificial neural networks and neural information processing
Application of majority voting to pattern recognition: an analysis of its behavior and performance
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
Wrapper feature selection methods are typically used in multi-class classification problems to determine which feature subspace maximizes the patterns' discriminative potential with respect to the global multi-class scope. However, in most classification tasks some classes are more easily discriminated than others, thanks to particularly predictive features; the global class set may therefore impose a hard restriction when performing feature selection. We propose a class pairwise approach, in which the wrapper feature selection framework is applied to determine the feature subspace with the highest discriminative potential for each class pair. This method is shown to yield simpler models, a reduced number of features, and higher scalability, and in some cases it even improves classification performance.
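The class pairwise scheme described above can be sketched in a few lines: run a wrapper (here, a toy greedy forward search scored by a nearest-centroid classifier on resubstitution accuracy; a real wrapper would cross-validate with the target classifier) once per class pair instead of once on the full class set. All function names and the evaluator are illustrative assumptions, not the paper's implementation.

```python
from itertools import combinations
import numpy as np

def nearest_centroid_accuracy(X, y, feats):
    """Resubstitution accuracy of a nearest-centroid classifier on a feature
    subset (toy evaluator standing in for the wrapped classifier)."""
    Xs = X[:, feats]
    classes = np.unique(y)
    centroids = np.array([Xs[y == c].mean(axis=0) for c in classes])
    dists = ((Xs[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return (classes[dists.argmin(axis=1)] == y).mean()

def greedy_wrapper(X, y, max_feats=2):
    """Greedy forward selection: repeatedly add the single feature that most
    improves the evaluator's score; stop when no feature helps."""
    selected, best_acc = [], 0.0
    for _ in range(max_feats):
        candidates = [f for f in range(X.shape[1]) if f not in selected]
        acc, f = max((nearest_centroid_accuracy(X, y, selected + [f]), f)
                     for f in candidates)
        if acc <= best_acc:
            break
        selected.append(f)
        best_acc = acc
    return selected, best_acc

def pairwise_feature_selection(X, y, max_feats=2):
    """Run the wrapper once per class pair, restricted to that pair's samples,
    yielding one discriminative subspace per pair."""
    subspaces = {}
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)
        subspaces[(a, b)] = greedy_wrapper(X[mask], y[mask], max_feats)
    return subspaces
```

Because each pairwise problem only needs the features that separate those two classes, the per-pair subspaces are typically much smaller than a single global subset, which is where the simpler models and scalability come from.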