We address feature selection for classification problems with small sample sizes and high dimensionality. A practical example is microarray-based cancer classification, where the sample size is typically under 100 while the number of features runs to several thousand or more. A commonly used method for this problem is recursive feature elimination (RFE), which exploits the generalization capability embedded in support vector machines and is therefore well suited to small-sample problems. We propose a novel method based on the minimum reference set (MRS) generated by the nearest neighbor rule. The MRS is the smallest set of samples such that the nearest neighbor rule over it correctly classifies all training samples. It is related to the structural risk minimization principle and thus leads to good generalization. The proposed MRS-based method is compared to RFE on several real datasets, and the experimental results show that the MRS method yields better classification performance.
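To make the MRS idea concrete, the following is a minimal sketch of one way to grow such a reference set: a greedy, condensed-nearest-neighbor-style heuristic that adds misclassified samples until 1-NN over the reference set classifies every training sample correctly. The function name and the greedy strategy are illustrative assumptions; the paper's exact MRS construction may differ.

```python
import numpy as np

def minimum_reference_set(X, y):
    """Greedy sketch of an MRS-style selection (condensed nearest
    neighbor heuristic, assumed here; not necessarily the paper's
    exact algorithm). Grows a reference set until 1-NN over it
    classifies every training sample correctly."""
    y = np.asarray(y)
    # Seed with one sample per class so 1-NN is defined for every label.
    ref = [int(np.flatnonzero(y == c)[0]) for c in np.unique(y)]
    while True:
        # Classify all training samples by 1-NN over the current reference set.
        dists = np.linalg.norm(X[:, None, :] - X[ref][None, :, :], axis=2)
        pred = y[np.asarray(ref)[np.argmin(dists, axis=1)]]
        wrong = [i for i in np.flatnonzero(pred != y) if i not in ref]
        if not wrong:
            return sorted(ref)  # all training samples correctly classified
        ref.append(int(wrong[0]))  # add a misclassified sample and repeat
```

Because every sample already in the reference set is its own nearest neighbor (distance zero), the loop can only add new samples, so it terminates with a set no larger than the training set; on well-separated classes the resulting set is typically far smaller, which is what ties MRS size to structural risk.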