In a growing number of domains, the captured data encapsulates as many features as possible. This poses a challenge to classical pattern recognition techniques, since the number of samples is often still limited with respect to the number of features. Classical pattern recognition methods suffer from such small sample sizes, and robust classification techniques are needed. To reduce the dimensionality of the feature space, the selection of informative features becomes an essential step towards classification. The relevance of the features can be evaluated either individually (univariate approaches) or in a multivariate manner. Univariate approaches are simple and fast, and therefore appealing; however, they do not consider possible correlations and dependencies between the features, which multivariate search techniques can capture. Several limitations nevertheless restrict the use of multivariate searches. First, they are prone to overtraining, especially in p ≫ n (many features, few samples) settings. Second, they can be computationally too expensive when dealing with a large feature space. We introduce a new multivariate search technique that is less sensitive to noise in the data and computationally feasible as well. We compare our approach with several multivariate and univariate feature selection techniques on an artificial dataset, which provides ground-truth information, and on a real dataset. The results show the importance of multivariate search techniques and the robustness and reliability of our novel multivariate feature selection method.
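To make the univariate/multivariate distinction concrete, the following is a minimal illustrative sketch (not the paper's actual algorithm): a univariate t-like score that ranks each feature in isolation, contrasted with a greedy multivariate forward search that evaluates features jointly. The nearest-centroid subset criterion and all function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def univariate_scores(X, y):
    """Univariate approach: score each feature individually with a
    two-sample t-like statistic (class-mean difference over pooled SE)."""
    a, b = X[y == 0], X[y == 1]
    num = np.abs(a.mean(axis=0) - b.mean(axis=0))
    den = np.sqrt(a.var(axis=0) / len(a) + b.var(axis=0) / len(b)) + 1e-12
    return num / den

def centroid_score(Xs, y):
    """Toy multivariate subset criterion: training accuracy of a
    nearest-centroid rule on the selected feature subset."""
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

def forward_selection(X, y, k, score):
    """Multivariate approach: greedily grow the subset one feature at a
    time, keeping the feature that most improves the joint score."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda j: score(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected
```

The univariate ranking costs one pass over the features; the forward search re-evaluates a classifier for every candidate subset, which is exactly why multivariate methods become expensive, and overfit-prone, when p greatly exceeds n.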