Proceedings of Reisensburg 2011, Computational Statistics
The k-Nearest Neighbour classifier is widely used and popular owing to its inherent simplicity and its avoidance of model assumptions. Although the approach has been shown to yield near-optimal classification performance in the limit of infinitely many training samples, selecting the most decisive data points can considerably improve classification accuracy in real settings with a limited number of samples. At the same time, restricting the classifier to a representative subset of the training samples reduces the required storage and computational resources. We devised a new approach that selects a representative training subset by means of an evolutionary optimization procedure. The method chooses those training samples that have a strong influence on the correct prediction of other training samples, in particular samples with uncertain labels. The performance of the algorithm is evaluated on several data sets. Additionally, we provide graphical examples of the selection procedure.
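To make the general idea concrete, the following is a minimal sketch, not the authors' implementation, of genetic-algorithm-based instance selection for kNN: binary chromosomes mark which training samples are kept, and the fitness rewards subsets that classify the full training set well while penalizing subset size. The function names (`ga_select`, `knn_predict`), the `size_penalty` weight, and the GA operators are illustrative assumptions; the paper's actual fitness, which additionally emphasizes samples with uncertain labels, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(X_ref, y_ref, X_query, k=3):
    """Majority vote among the k nearest reference points (integer labels)."""
    d = np.linalg.norm(X_query[:, None, :] - X_ref[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]          # k nearest references per query
    return np.array([np.bincount(v).argmax() for v in y_ref[idx]])

def fitness(mask, X, y, k=3, size_penalty=0.1):
    """Accuracy of the selected subset at predicting the whole training set,
    minus a penalty on subset size. Simplification: selected samples may
    match themselves, which slightly inflates the accuracy term."""
    if mask.sum() < k:                           # too few references to vote
        return -np.inf
    pred = knn_predict(X[mask], y[mask], X, k=k)
    return (pred == y).mean() - size_penalty * mask.mean()

def ga_select(X, y, pop_size=30, generations=50, k=3, p_mut=0.02):
    """Generational GA over binary inclusion masks with tournament selection,
    uniform crossover, bit-flip mutation, and one elitist copy."""
    n = len(X)
    pop = rng.random((pop_size, n)) < 0.5        # random initial masks
    for _ in range(generations):
        fit = np.array([fitness(ind, X, y, k) for ind in pop])
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[a] >= fit[b], a, b)]   # binary tournaments
        cross = rng.random((pop_size, n)) < 0.5
        children = np.where(cross, parents, parents[::-1])  # uniform crossover
        children ^= rng.random((pop_size, n)) < p_mut       # bit-flip mutation
        children[0] = pop[fit.argmax()]                     # elitism
        pop = children
    fit = np.array([fitness(ind, X, y, k) for ind in pop])
    return pop[fit.argmax()]

# Toy usage: two Gaussian blobs with integer class labels.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)
mask = ga_select(X, y)
print(f"kept {mask.sum()} of {len(X)} training samples")
```

The size penalty trades prediction accuracy against the number of retained samples, and the elitist copy ensures the best mask found so far is never lost between generations; both are common GA conventions rather than details taken from the paper.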