This work deals with the problem of selecting variables (features) that are subsequently used in discriminant analysis. The aim is to find, from a set of m variables, smaller subsets that enable an efficient classification of cases into two classes. We consider two objectives, each associated with the misclassification error in one class (type I and type II errors). Thus, we establish a bi-objective problem and develop an algorithm based on the NSGA-II strategy for this specific problem, in order to obtain a set of non-dominated solutions. Managing these two objectives separately (rather than jointly) allows an enhanced analysis of the obtained solutions by observing their approach to the efficient frontier. This is especially significant when each type of error has a different level of importance or when the two cannot be compared. To illustrate these issues, several well-known databases from the literature are used, as well as an additional database of Spanish firms described by financial variables and two classes: ''creditworthy'' and ''non-creditworthy''. Finally, we show that when the solutions obtained by our NSGA-II implementation are evaluated from the classic mono-objective perspective (jointly minimizing both error types), they are better than those obtained by classic feature selection methods and similar to those provided by other recently published methods.
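The core idea of the abstract can be sketched in code: score each candidate feature subset by its two class-wise error rates and keep only the non-dominated (Pareto-optimal) subsets. The sketch below is a minimal illustration, not the authors' implementation: it uses synthetic data, a nearest-centroid rule as a stand-in for discriminant analysis, and random subset sampling instead of NSGA-II's genetic operators; all names and parameters are illustrative assumptions.

```python
import random

random.seed(0)

# Synthetic two-class data: 6 features, only the first two are informative
# (class means differ on features 0 and 1 only).
def make_data(n=100, m=6):
    X, y = [], []
    for label in (0, 1):
        for _ in range(n):
            X.append([random.gauss(2.0 * label if j < 2 else 0.0, 1.0)
                      for j in range(m)])
            y.append(label)
    return X, y

def errors(X, y, subset):
    """Type I / type II error rates of a nearest-centroid rule on `subset`.

    Nearest-centroid is a simplified stand-in for the paper's
    discriminant-analysis classifier.
    """
    cent = {}
    for label in (0, 1):
        rows = [x for x, lab in zip(X, y) if lab == label]
        cent[label] = [sum(r[j] for r in rows) / len(rows) for j in subset]
    miss, count = {0: 0, 1: 0}, {0: 0, 1: 0}
    for x, lab in zip(X, y):
        pred = min((0, 1), key=lambda c: sum((x[j] - cent[c][i]) ** 2
                                             for i, j in enumerate(subset)))
        count[lab] += 1
        if pred != lab:
            miss[lab] += 1
    return miss[0] / count[0], miss[1] / count[1]  # (type I, type II)

def dominates(a, b):
    """a dominates b if it is no worse on both errors and strictly better on one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

X, y = make_data()
# Random subset sampling; NSGA-II would instead evolve subsets with
# crossover, mutation, and crowding-distance selection.
subsets = {tuple(j for j in range(6) if random.random() < 0.5) or (0,)
           for _ in range(40)}
scored = [(s, errors(X, y, s)) for s in subsets]
front = pareto_front(list({e for _, e in scored}))
```

Keeping the two error rates separate, as the paper argues, lets a decision maker pick a point on `front` according to the relative cost of each error type, instead of committing to a single weighted sum up front.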