ICONIP'10 Proceedings of the 17th International Conference on Neural Information Processing: Theory and Algorithms - Volume Part I
A difficulty faced by existing reduction techniques for the k-NN algorithm is that they require loading the whole training data set into memory. As a result, these approaches often become inefficient on large-scale problems. To overcome this deficiency, we propose a new sample-reduction method for the k-NN algorithm. The basic idea behind the proposed method is a self-recombination learning strategy, originally designed for combining classifiers: it speeds up response time by reducing the number of base classifiers that must be checked, and it improves generalization performance by rearranging the order of the training samples. Experimental results on several benchmark problems indicate that the proposed method is both valid and efficient.
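The abstract does not spell out the self-recombination strategy itself, but the memory limitation it criticizes is easy to see in classic reduction techniques. As a point of reference only (not the authors' method), below is a minimal sketch of Hart's condensed nearest neighbor (CNN) reduction: each pass re-scans the entire training set, which must therefore be held in memory throughout. The function name and toy data are illustrative assumptions.

```python
import numpy as np

def condensed_nearest_neighbor(X, y, seed=None):
    """Hart's CNN reduction (1-NN variant), shown to illustrate why
    classic reduction techniques need the full training set (X, y)
    resident in memory: every pass scans all samples again."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))   # scan order affects the result in CNN
    keep = [order[0]]                 # seed the condensed set
    changed = True
    while changed:                    # repeat until a full pass adds nothing
        changed = False
        for i in order:
            # classify sample i by 1-NN over the current condensed set
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep][np.argmin(d)] != y[i]:
                keep.append(i)        # absorb the misclassified sample
                changed = True
    return X[keep], y[keep]

# Toy usage: reduce a small two-class data set
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0],
              [5.0, 6.0], [0.2, 0.1], [5.1, 5.2]])
y = np.array([0, 0, 1, 1, 0, 1])
X_red, y_red = condensed_nearest_neighbor(X, y, seed=0)
print(len(X_red), "of", len(X), "samples kept")
```

Because the inner loop iterates over the full training set on every pass, such methods scale poorly when the data cannot fit in memory, which is the scalability gap the proposed sample-reduction method aims to close.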