Feature (attribute) selection is a crucial step when knowledge discovery is applied to very large databases. Its main objective is to eliminate irrelevant or redundant attributes, making the problem computationally tractable without degrading classification quality. This article evaluates a novel optimization approach that uses concave programming to simultaneously minimize the number of attributes fed to the mining algorithm and the classification error. The technique is evaluated on a billing database from the national electric utility of Mexico, and the results are compared against those obtained with traditional techniques. Based on this experimentation, several improvements to the optimization approach are suggested.
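To make the idea concrete, the concave-programming formulation (in the style of Feature Selection via Concave Minimization) replaces the non-smooth attribute count ||w||_0 with the concave surrogate sum_j (1 - exp(-alpha*|w_j|)), which is minimized jointly with a classification loss. The sketch below is illustrative only: it uses plain subgradient descent on a hinge loss plus that surrogate, rather than the successive linear programming used in the literature, and the toy dataset, parameter values, and function name `train_fsv` are all assumptions, not taken from the paper.

```python
import math

def train_fsv(X, y, alpha=5.0, lam=0.1, lr=0.05, epochs=2000):
    """Toy sketch: hinge loss + concave zero-norm surrogate,
    optimized by subgradient descent (NOT the paper's exact method)."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_features
        gb = 0.0
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # hinge loss subgradient for violating points
                for j in range(n_features):
                    gw[j] -= yi * xi[j]
                gb -= yi
        for j in range(n_features):
            # subgradient of lam * (1 - exp(-alpha*|w_j|)):
            # strong pull toward zero for small weights, nearly
            # no penalty for weights that are clearly useful
            s = 1.0 if w[j] > 0 else (-1.0 if w[j] < 0 else 0.0)
            gw[j] += lam * alpha * math.exp(-alpha * abs(w[j])) * s
            w[j] -= lr * gw[j] / len(X)
        b -= lr * gb / len(X)
    return w, b

# Tiny synthetic example: only feature 0 determines the class,
# feature 1 is near-noise and should be driven toward zero.
X = [[2.0, 0.1], [1.5, -0.2], [1.0, 0.05],
     [-2.0, 0.1], [-1.5, -0.1], [-1.0, 0.2]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_fsv(X, y)
```

After training, the weight on the informative attribute dominates while the noise attribute's weight is pushed near zero, illustrating how the concave penalty performs selection and error minimization in one objective.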