A new improved forward floating selection (IFFS) algorithm for selecting a subset of features is presented. The proposed algorithm improves on the state-of-the-art sequential forward floating selection (SFFS) algorithm by adding a search step called "replacing the weak feature": at each sequential step, it checks whether removing one feature from the currently selected subset and adding a new one in its place improves the subset. The method yields optimal or quasi-optimal (close-to-optimal) solutions for many subset sizes while requiring significantly less computation than optimal feature selection algorithms. Experimental results on four databases demonstrate that the algorithm consistently selects better subsets than other suboptimal feature selection algorithms, especially when the original number of features is large.
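As a rough illustration of the search procedure described above, here is a minimal Python sketch of one plausible reading of IFFS. The criterion function J, the tie-breaking rules, and the termination details are assumptions not specified by the abstract; the abstract itself only names the extra "replacing the weak feature" step added on top of SFFS.

```python
def iffs(n_features, J, target_size):
    """Improved Forward Floating Selection (IFFS), a minimal sketch.

    n_features  : total number of candidate features (indexed 0..n_features-1)
    J           : criterion, J(frozenset of indices) -> float, higher is better
    target_size : desired subset size

    J and the stopping details are assumptions, not the paper's specification.
    """
    selected = set()
    best = {}  # subset size -> (best score seen at that size, subset)

    def record():
        score = J(frozenset(selected))
        if len(selected) not in best or score > best[len(selected)][0]:
            best[len(selected)] = (score, frozenset(selected))

    while len(selected) < target_size:
        # Step 1 -- inclusion (plain SFS step): add the single best new feature.
        f_add = max((f for f in range(n_features) if f not in selected),
                    key=lambda f: J(frozenset(selected | {f})))
        selected.add(f_add)
        record()

        # Step 2 -- conditional exclusion (the "floating" step of SFFS): drop
        # a feature while doing so beats the best subset seen at that size.
        while len(selected) > 2:
            f_rm = max(selected, key=lambda f: J(frozenset(selected - {f})))
            if J(frozenset(selected - {f_rm})) > best[len(selected) - 1][0]:
                selected.remove(f_rm)
                record()
            else:
                break

        # Step 3 -- "replacing the weak feature" (the IFFS addition): try every
        # swap of one selected feature for one unselected feature, and keep the
        # best swap as long as it improves the current subset.
        improved = True
        while improved and len(selected) < n_features:
            improved = False
            base = J(frozenset(selected))
            f_out, f_in = max(
                ((o, i) for o in selected
                 for i in range(n_features) if i not in selected),
                key=lambda p: J(frozenset(selected - {p[0]} | {p[1]})))
            if J(frozenset(selected - {f_out} | {f_in})) > base:
                selected.remove(f_out)
                selected.add(f_in)
                record()
                improved = True

    return best[target_size][1]
```

In practice, J would typically be a wrapper criterion such as cross-validated classifier accuracy on the candidate subset, or a filter measure such as a class-separability distance; the swap step is what distinguishes this sketch from plain SFFS.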