This paper deals with the problem of wrapper feature subset selection (FSS) in classification-oriented datasets with a (very) large number of attributes. In high-dimensional datasets with thousands of variables, wrapper FSS becomes a laborious computational process because of the CPU time it requires. In this paper we study how, under certain circumstances, the wrapper FSS process can be sped up by embedding the classifier into the wrapper algorithm instead of treating it as a black box. Our proposal is based on the combination of the naive Bayes (NB) classifier (which is known to be largely beneficial for FSS) with incremental wrapper FSS algorithms. The merit of this approach is analyzed both theoretically and experimentally, and the results show an impressive speed-up for the embedded FSS process.
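
To make the idea concrete, below is a minimal, illustrative Python sketch of an incremental wrapper FSS loop with an embedded naive Bayes classifier. It is not the authors' exact algorithm: it assumes categorical attributes, uses training-set accuracy as the wrapper score (a proper wrapper would use cross-validation or a hold-out set), and ranks attributes by single-attribute NB accuracy as a stand-in for the ranking criterion used in the paper; function names such as incremental_wrapper_fss and attr_log_likelihood are hypothetical. The point it demonstrates is that, because NB factorizes over attributes, evaluating the current subset plus one candidate attribute only requires adding that attribute's precomputed per-class log-likelihoods to a running per-instance score, so the classifier is never retrained from scratch.

import numpy as np

def log_priors(y, classes, alpha=1.0):
    # Laplace-smoothed log P(c) for each class.
    counts = np.array([(y == c).sum() for c in classes], dtype=float)
    return np.log((counts + alpha) / (counts.sum() + alpha * len(classes)))

def attr_log_likelihood(col, y, classes, alpha=1.0):
    # Per-instance log P(x_i = v | c) for one categorical attribute,
    # precomputed once and reused for every subset evaluation.
    values = np.unique(col)
    ll = np.empty((len(col), len(classes)))
    for j, c in enumerate(classes):
        col_c = col[y == c]
        denom = len(col_c) + alpha * len(values)
        probs = {v: ((col_c == v).sum() + alpha) / denom for v in values}
        ll[:, j] = np.log([probs[v] for v in col])
    return ll

def accuracy(score, y, classes):
    # Predict the class with the highest joint log-score for each instance.
    return (classes[np.argmax(score, axis=1)] == y).mean()

def incremental_wrapper_fss(X, y):
    # Scan attributes in ranked order and keep each one only if it improves
    # the wrapper score of the current subset. Because the NB classifier is
    # embedded, each candidate evaluation is a single matrix addition.
    n, d = X.shape
    classes = np.unique(y)
    tables = [attr_log_likelihood(X[:, i], y, classes) for i in range(d)]
    base = np.tile(log_priors(y, classes), (n, 1))
    # Cheap ranking step: single-attribute NB accuracy (a stand-in for the
    # ranking measure used in the paper).
    order = sorted(range(d), key=lambda i: -accuracy(base + tables[i], y, classes))
    selected, running, best = [], base, accuracy(base, y, classes)
    for i in order:
        candidate = running + tables[i]   # embedded NB update, no retraining
        acc = accuracy(candidate, y, classes)
        if acc > best:                    # keep the attribute only if it helps
            selected.append(i)
            running, best = candidate, acc
    return selected, best

# Tiny synthetic usage example: two informative attributes and one noise attribute.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=200)
    X = np.column_stack([
        y ^ (rng.random(200) < 0.1),   # strongly informative
        rng.integers(0, 3, size=200),  # pure noise
        y ^ (rng.random(200) < 0.3),   # weakly informative
    ])
    print(incremental_wrapper_fss(X, y))

In this sketch the per-attribute log-likelihood tables are computed once, so each of the d candidate evaluations costs O(n) additions rather than a full NB training pass, which is the source of the speed-up the abstract refers to.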