In this paper, a new wrapper method for feature selection, IAFN-FS (Incremental ANalysis Of VAriance and Functional Networks for Feature Selection), is presented. The method uses the AFN (ANOVA and Functional Networks) learning method as its induction algorithm; it follows a backward, non-sequential strategy starting from the complete set of features, which allows several variables to be discarded in a single step and thus reduces computational time; and it is able to take "multivariate" relations between features into account. An important characteristic of the method is that it lets the user interpret the results, because the relevance of each selected or rejected feature is expressed in terms of its variance. IAFN-FS is applied to several benchmark real-world classification data sets and shows adequate performance. A comparison with the results obtained by other wrapper methods is also carried out, showing that the proposed method achieves better performance on average.
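To make the backward, non-sequential wrapper strategy concrete, the following is a minimal sketch and not the authors' IAFN-FS implementation: a scikit-learn logistic-regression classifier stands in for the AFN induction algorithm, the breast-cancer benchmark stands in for the paper's data sets, and per-feature relevance is scored by the drop in cross-validated accuracy rather than by the ANOVA-based variance decomposition used in the paper. The function name, tolerance parameter, and dataset are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def backward_nonsequential_selection(X, y, estimator, tol=0.002, cv=5):
    """Backward wrapper sketch: in each pass, drop every feature whose
    removal does not hurt cross-validated accuracy by more than `tol`.
    Several features may be discarded in a single step (non-sequential),
    and the loop stops when no further feature can be removed."""
    selected = list(range(X.shape[1]))
    while True:
        base_score = cross_val_score(estimator, X[:, selected], y, cv=cv).mean()
        to_drop = []
        for f in selected:
            reduced = [g for g in selected if g != f]
            score = cross_val_score(estimator, X[:, reduced], y, cv=cv).mean()
            if base_score - score <= tol:  # removing f costs (almost) nothing
                to_drop.append(f)
        if len(to_drop) == len(selected):
            to_drop = to_drop[:-1]  # always keep at least one feature
        if not to_drop:
            break
        selected = [f for f in selected if f not in to_drop]
    return selected


if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
    kept = backward_nonsequential_selection(X, y, clf)
    print(f"kept {len(kept)} of {X.shape[1]} features: {kept}")
```

Because all non-harmful features are removed together in one pass, the sketch needs far fewer model fits than a strictly sequential backward elimination, which mirrors the computational-time argument made above; the interpretability claim, by contrast, depends on the ANOVA-based variance scores of the actual method and is not reproduced here.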