Selection of relevant features and examples in machine learning. Artificial Intelligence - Special issue on relevance.
Wrappers for feature subset selection. Artificial Intelligence - Special issue on relevance.
Random Forests. Machine Learning.
SLIQ: A Fast Scalable Classifier for Data Mining. EDBT '96 Proceedings of the 5th International Conference on Extending Database Technology: Advances in Database Technology.
Feature selection for high-dimensional genomic microarray data. ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning.
On Feature Selection: Learning with Exponentially Many Irrelevant Features as Training Examples. ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning.
Filters, Wrappers and a Boosting-Based Hybrid for Feature Selection. ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning.
CSB '03 Proceedings of the IEEE Computer Society Conference on Bioinformatics.
Feature Selection for Support Vector Machines by Means of Genetic Algorithms. ICTAI '03 Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence.
Pattern Classification (2nd Edition).
Feature selection for classifying high-dimensional numerical data. CVPR'04 Proceedings of the 2004 IEEE computer society conference on Computer vision and pattern recognition.
Combined kernel function approach in SVM for diagnosis of cancer. ICNC'05 Proceedings of the First international conference on Advances in Natural Computation - Volume Part I.
Automatic feature selection for anomaly detection. Proceedings of the 1st ACM workshop on Workshop on AISec.
Determining the relevant features is a combinatorial task in many fields of machine learning, such as text mining, bioinformatics, and pattern recognition. Many methods have been developed to extract relevant features, but none is clearly superior. Breiman proposed Random Forest, an ensemble classifier built on the CART tree algorithm, which yields good results compared with other classifiers. Taking advantage of Random Forest and of the wrapper approach first introduced by Kohavi et al., we propose an algorithm named Dynamic Recursive Feature Elimination (DRFE) that searches for an optimal subset of features, reducing noise in the data and improving classifier performance. Our method uses Random Forest as the induction classifier and defines its own feature elimination function by adding extra terms to the feature score. We conducted experiments on two public datasets, Colon cancer and Leukemia cancer. On these real-world data, the proposed method achieved a higher prediction rate than the baseline algorithm, and its results are comparable to, and sometimes better than, those of widely used classification methods in the feature selection literature.
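Since the paper's exact elimination function is not reproduced in this abstract, the Python sketch below (assuming scikit-learn) only illustrates the general shape of such a wrapper loop: Random Forest as the induction classifier, a feature score extended with an extra term, and recursive removal of the lowest-scoring features. The function name drfe_sketch, the redundancy penalty, its 0.1 weight, and all parameter values are illustrative assumptions, not the authors' definitions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def drfe_sketch(X, y, min_features=5, drop_fraction=0.1, seed=0):
    """Recursive feature elimination sketch: repeatedly score features
    with a Random Forest wrapper and drop the weakest, keeping the
    subset with the best cross-validated accuracy seen so far."""
    remaining = np.arange(X.shape[1])
    best_subset, best_acc = remaining.copy(), -np.inf

    while len(remaining) >= min_features:
        rf = RandomForestClassifier(n_estimators=200, random_state=seed)
        acc = cross_val_score(rf, X[:, remaining], y, cv=5).mean()
        if acc > best_acc:
            best_acc, best_subset = acc, remaining.copy()

        rf.fit(X[:, remaining], y)
        # Hypothetical stand-in for the paper's extended feature score:
        # RF impurity importance minus a redundancy penalty derived from
        # the mean absolute correlation with the other remaining features.
        importance = rf.feature_importances_
        corr = np.abs(np.corrcoef(X[:, remaining], rowvar=False))
        redundancy = (corr.sum(axis=0) - 1.0) / (len(remaining) - 1)
        adjusted_score = importance - 0.1 * redundancy

        # Eliminate a fixed fraction of the lowest-scoring features.
        n_drop = max(1, int(drop_fraction * len(remaining)))
        remaining = remaining[np.argsort(adjusted_score)[n_drop:]]

    return best_subset, best_acc

# Usage on a gene-expression matrix X (samples x genes) with labels y:
# selected, acc = drfe_sketch(X, y)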