We propose a new adaptive branch and bound algorithm for selecting the optimal feature subset in pattern recognition applications. The algorithm speeds up the search by avoiding unnecessary criterion function evaluations at nodes of the solution tree. It introduces four new properties: (i) ordering the tree nodes by the significance of features during construction of the tree, (ii) obtaining a large "good" initial bound by a floating search method, (iii) a new method for selecting the initial search level in the tree, and (iv) a new adaptive jump search strategy for selecting subsequent search levels, which avoids redundant criterion function calculations. Experimental results on four databases demonstrate that our method is significantly faster than other versions of the branch and bound algorithm when the database has more than 30 features.
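To make the underlying search concrete, the following is a minimal sketch of classical branch and bound feature subset selection: starting from the full feature set, features are removed one at a time, and a branch is pruned whenever the (assumed monotone) criterion `J` at a node already falls below the current bound, since monotonicity guarantees no descendant subset can do better. This is a plain illustration of the baseline technique only; the paper's adaptive refinements (node ordering by feature significance, a floating-search initial bound, and adaptive jump search) are not reproduced here, and the function names are hypothetical.

```python
def branch_and_bound(features, d, J):
    """Find the optimal d-feature subset of `features` under a monotone
    criterion J: J(subset) must never increase when a feature is removed.
    Returns (best_subset, best_score, criterion_evaluations)."""
    best_bound = -float("inf")
    best_subset = None
    n_evals = 0

    def search(current, start):
        nonlocal best_bound, best_subset, n_evals
        n_evals += 1
        score = J(current)
        # Prune: by monotonicity, every descendant scores <= score,
        # so nothing below this node can beat the current bound.
        if score <= best_bound:
            return
        if len(current) == d:
            best_bound, best_subset = score, current
            return
        # Remove one more feature. Restricting removal positions to
        # range(start, d + 1) enumerates each d-subset exactly once
        # and never enters a branch that cannot reach depth d.
        for i in range(start, d + 1):
            search(current[:i] + current[i + 1:], i)

    search(tuple(features), 0)
    return best_subset, best_bound, n_evals
```

For example, with `J` defined as the sum of per-feature scores (a monotone criterion for positive scores), the search returns the subset of the `d` highest-scoring features while skipping branches whose score already trails the bound.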