The Nearest Neighbor classifier is one of the most popular supervised classification methods: it is simple, intuitive, and accurate in a wide variety of real-world applications. Despite its simplicity and effectiveness, its practical use has historically been limited by high storage requirements, high computational cost, and sensitivity to outliers. These drawbacks can be mitigated by a suitable prototype selection scheme, which reduces both storage and computing time and often improves classification accuracy as well. Nevertheless, in some practical cases prototype selection may actually degrade the classifier's effectiveness, and from an empirical point of view it is still difficult to know a priori when it will behave appropriately. The present paper aims to predict how well a prototype selection algorithm will perform on a particular problem by characterizing the data with a set of complexity measures.
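To make the pipeline concrete, here is a minimal sketch (not the authors' implementation) of the three ingredients the abstract mentions: a 1-NN classifier, Wilson editing as one representative prototype selection scheme, and Fisher's discriminant ratio (F1) as a simple data complexity measure. The dataset and all function names are illustrative assumptions, and F1 is computed on a single feature for brevity.

```python
# Illustrative sketch: 1-NN classification, Wilson editing as a
# prototype selection scheme, and Fisher's discriminant ratio (F1)
# as a data complexity measure. Names and data are hypothetical.

def dist(a, b):
    # Squared Euclidean distance (monotone in the true distance,
    # so it is sufficient for nearest-neighbor comparisons).
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nn_classify(x, prototypes):
    # 1-NN rule: return the label of the closest stored prototype.
    return min(prototypes, key=lambda p: dist(x, p[0]))[1]

def wilson_editing(data, k=3):
    # Wilson editing: discard every sample that disagrees with the
    # majority label of its k nearest neighbors (self excluded).
    edited = []
    for i, (x, y) in enumerate(data):
        others = [p for j, p in enumerate(data) if j != i]
        neighbors = sorted(others, key=lambda p: dist(x, p[0]))[:k]
        votes = [label for _, label in neighbors]
        if max(set(votes), key=votes.count) == y:
            edited.append((x, y))
    return edited

def fisher_f1(data):
    # Fisher's discriminant ratio on the first feature, two classes:
    # (mu0 - mu1)^2 / (var0 + var1). Higher F1 -> easier problem.
    feats = {0: [], 1: []}
    for x, y in data:
        feats[y].append(x[0])
    mu = {c: sum(v) / len(v) for c, v in feats.items()}
    var = {c: sum((t - mu[c]) ** 2 for t in v) / len(v)
           for c, v in feats.items()}
    return (mu[0] - mu[1]) ** 2 / (var[0] + var[1])

# Tiny two-class toy set with one mislabeled outlier inside class 0.
data = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((0.2, 0.1), 0),
        ((1.0, 1.0), 1), ((0.9, 1.1), 1), ((1.1, 0.9), 1),
        ((0.05, 0.1), 1)]  # the outlier

edited = wilson_editing(data)
print(len(data), "->", len(edited), "prototypes after editing")
print("F1 complexity:", round(fisher_f1(data), 3))
print("prediction for (0.0, 0.1):", nn_classify((0.0, 0.1), edited))
```

Editing removes the mislabeled outlier, so the 1-NN rule built on the edited set classifies points near the class-0 cluster correctly with fewer stored prototypes; a complexity measure such as F1, computed before selection, is the kind of data characterization the paper uses to predict whether such a reduction will help or hurt.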