ICONIP'06 Proceedings of the 13th International Conference on Neural Information Processing - Volume Part I
This paper presents a new method, based on a divide-and-conquer approach, for selecting and replacing a set of prototypes from the training set for the nearest neighbor rule. The method aims to reduce computational time and memory space, as well as sensitivity to the order and noise of the training data. The reduced prototype set contains Pairwise Opposite Class-Nearest Neighbor (POC-NN) prototypes, which lie close to the decision boundary and are used in place of the training patterns. POC-NN prototypes are obtained by recursively separating and analyzing the training data into two regions until each region is correctly grouped and classified. Separability is determined by the POC-NN prototypes, which are essential for defining the locations of all separating hyperplanes. The method is fast and order independent, and the user can control the number of prototypes and the overfitting of the model. Experimental results demonstrate the effectiveness of this technique: its accuracy, prototype rate, and training time compare favorably with those obtained by classical nearest neighbor techniques.
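The recursive separation described in the abstract can be sketched roughly as follows. This is a minimal illustration of the general idea, not the authors' exact algorithm: the function names (`poc_nn_pair`, `select_prototypes`), the brute-force closest-pair search, and the pure-region stopping test are assumptions made for the sketch.

```python
import numpy as np

def poc_nn_pair(X, y):
    """Return indices of the closest pair of points with opposite class
    labels (a Pairwise Opposite Class-Nearest Neighbor pair)."""
    best, best_d = None, np.inf
    for i in range(len(X)):
        for j in range(len(X)):
            if y[i] != y[j]:
                d = np.linalg.norm(X[i] - X[j])
                if d < best_d:
                    best_d, best = d, (i, j)
    return best

def select_prototypes(X, y):
    """Recursively select POC-NN prototypes (sketch).

    Each POC-NN pair defines a hyperplane (the perpendicular bisector of
    the pair); the region is split by that hyperplane and the procedure
    recurses until every region contains a single class.
    """
    protos_X, protos_y = [], []

    def recurse(idx):
        Xs, ys = X[idx], y[idx]
        if len(set(ys.tolist())) < 2:
            return  # region is pure: correctly grouped, stop here
        i, j = poc_nn_pair(Xs, ys)
        p, q = Xs[i], Xs[j]
        protos_X.extend([p, q])
        protos_y.extend([ys[i], ys[j]])
        # Split by the hyperplane bisecting the POC-NN pair: the sign of
        # (x - midpoint) . (q - p) tells which side x falls on.
        w = q - p
        m = (p + q) / 2.0
        side = (Xs - m) @ w
        recurse(idx[side <= 0])  # contains p, strictly smaller
        recurse(idx[side > 0])   # contains q, strictly smaller
    recurse(np.arange(len(X)))
    return np.array(protos_X), np.array(protos_y)
```

Because each split sends the two pair members to opposite sides, every recursive call operates on a strictly smaller region, so the recursion terminates. The returned prototypes can then replace the training set in an ordinary nearest-neighbor classifier.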