The k-nearest neighbors (k-NN) classifier is one of the most popular supervised classification methods. It is simple, intuitive, and accurate in a great variety of real-world domains. Nonetheless, despite its simplicity and effectiveness, the practical use of this rule has historically been limited by its high storage requirements and computational cost. Moreover, the performance of this classifier appears to be strongly sensitive to the complexity of the training data. In this context, we use several problem difficulty measures to characterize the behavior of the k-NN rule under certain situations. More specifically, the present analysis focuses on data complexity measures that describe class overlap, feature space dimensionality, and class density, and investigates their relation to the practical accuracy of this classifier.
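As a concrete illustration of this kind of analysis, the sketch below (not the authors' code; the dataset, the choice of k, the simplified multiclass F1 formula, and the scikit-learn/NumPy dependencies are assumptions) computes two widely used complexity measures, Fisher's maximum discriminant ratio (F1, capturing class overlap) and the leave-one-out 1-NN error (N3, capturing how densely the classes meet at the boundary), and compares them with the cross-validated accuracy of a k-NN classifier.

```python
# Minimal sketch: relate two data complexity measures to k-NN accuracy.
# Assumptions: scikit-learn's iris dataset, k = 5, and a simplified
# multiclass version of Fisher's discriminant ratio.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier


def fisher_f1(X, y):
    """Maximum Fisher's discriminant ratio over features (higher = less class overlap)."""
    classes = np.unique(y)
    ratios = []
    for j in range(X.shape[1]):
        means = np.array([X[y == c, j].mean() for c in classes])
        variances = np.array([X[y == c, j].var() for c in classes])
        # Sum of squared pairwise mean differences over the pooled variance
        # (a simplified multiclass variant of the F1 measure).
        numerator = sum(
            (means[a] - means[b]) ** 2
            for a in range(len(classes))
            for b in range(a + 1, len(classes))
        )
        ratios.append(numerator / variances.sum())
    return max(ratios)


def n3_loo_error(X, y):
    """Leave-one-out error of a 1-NN classifier (higher = more intricate class boundary)."""
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=LeaveOneOut())
    return 1.0 - scores.mean()


X, y = load_iris(return_X_y=True)
knn_acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=10).mean()
print(f"F1 (class overlap):        {fisher_f1(X, y):.3f}")
print(f"N3 (boundary complexity):  {n3_loo_error(X, y):.3f}")
print(f"k-NN (k=5) 10-fold CV acc: {knn_acc:.3f}")
```

Under the intuition outlined in the abstract, an easy problem would show a high F1, a low N3, and correspondingly high k-NN accuracy, while datasets with heavy class overlap or sparse class density would drive N3 up and the k-NN accuracy down.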