Feature subset selection by Bayesian network-based optimization. Artificial Intelligence.
Feature Selection for Knowledge Discovery and Data Mining.
AIME '01: Proceedings of the 8th Conference on AI in Medicine in Europe, Artificial Intelligence in Medicine.
Parallel algorithms for computing all possible subset regression models using the QR decomposition. Parallel Computing, special issue: Parallel Computing in Numerical Optimization.
Automatic digital modulation recognition using artificial neural network and genetic algorithm. Signal Processing, special issue on Independent Components Analysis and Beyond.
A Branch and Bound Algorithm for Feature Subset Selection. IEEE Transactions on Computers.
Using learning to facilitate the evolution of features for recognizing visual concepts. Evolutionary Computation.
Feature analysis and classification of protein secondary structure data. ICANN/ICONIP '03: Proceedings of the 2003 Joint International Conference on Artificial Neural Networks and Neural Information Processing.
Feature subset selection by genetic algorithms and estimation of distribution algorithms. Artificial Intelligence in Medicine.
A probabilistic heuristic for a computationally difficult set covering problem. Operations Research Letters.
Exact and approximate algorithms for variable selection in linear discriminant analysis. Computational Statistics & Data Analysis.
A GRASP method for building classification trees. Expert Systems with Applications: An International Journal.
Direct variable selection for discrimination among several groups. Journal of Multivariate Analysis.
Bi-objective feature selection for discriminant analysis in two-class classification
Knowledge-Based Systems
Several methods for selecting the variables subsequently used in discriminant analysis are proposed and analysed. The aim is to find, from among a set of m variables, a smaller subset that enables an efficient classification of cases. Reducing dimensionality has several advantages: lower data-acquisition costs, a more interpretable classification model, and gains in both the efficiency and the efficacy of the model itself. The specific problem consists of finding, for a small integer value of p, the size-p subset of the original variables that yields the greatest percentage of hits in the discriminant analysis. To solve this problem, a series of techniques based on metaheuristic strategies is proposed. Tests show that these techniques obtain significantly better results than the stepwise, backward, or forward methods used by classic statistical packages. The way these methods work is illustrated with several examples.
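The stepwise/forward baseline mentioned in the abstract can be sketched as a greedy wrapper: starting from the empty set, repeatedly add the variable that most improves classification accuracy until p variables are chosen. Below is a minimal Python sketch of this idea; the nearest-class-centroid rule stands in for the discriminant classifier, and the function names and synthetic data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def centroid_accuracy(X, y, cols):
    # Classify each case to the nearest class centroid, using only the
    # selected columns, and return the resubstitution hit rate.
    Xs = X[:, cols]
    classes = np.unique(y)
    centroids = np.array([Xs[y == c].mean(axis=0) for c in classes])
    # Squared Euclidean distance from every case to every centroid.
    d = ((Xs[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    pred = classes[d.argmin(axis=1)]
    return (pred == y).mean()

def forward_select(X, y, p):
    # Greedy forward selection: at each step add the variable that
    # maximizes accuracy of the classifier on the enlarged subset.
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < p:
        best = max(remaining,
                   key=lambda j: centroid_accuracy(X, y, selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Illustrative synthetic data: only column 2 separates the two classes.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 5))
X[:, 2] += 4.0 * y
print(forward_select(X, y, 2))  # column 2 is picked first
```

Metaheuristics such as those the paper proposes replace this single greedy pass with a search (e.g. restarts or population-based moves) over subsets, which is why they can escape the local optima that stepwise, forward, and backward procedures get trapped in.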