In solving pattern recognition problems, many classification methods, such as the nearest-neighbor (NN) rule, need to determine prototypes from a training set. To improve the performance of these classifiers in finding an efficient set of prototypes, this paper introduces a training-sample sequence planning method. In particular, by estimating the relative nearness of each training sample to the decision boundary, the proposed approach incrementally increases the number of prototypes until the desired classification accuracy is reached. The approach has been tested with an NN classification method and a neural-network training approach. Studies on both artificial and real data demonstrate that higher classification accuracy can be achieved with fewer prototypes.
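The abstract does not specify how "relative nearness to the decision boundary" is estimated or how the stopping rule works; the sketch below is one plausible reading, not the paper's actual algorithm. It scores each sample by the ratio of its distance to the nearest same-class neighbor over its distance to the nearest other-class neighbor (higher ratio, closer to the boundary), then adds prototypes in that order until a target 1-NN training accuracy is met. The function names (`boundary_scores`, `plan_prototypes`) and the particular nearness measure are assumptions for illustration.

```python
import numpy as np

def boundary_scores(X, y):
    """Score each sample by d(nearest same-class neighbor) /
    d(nearest other-class neighbor); a higher score suggests the
    sample lies closer to the decision boundary.
    NOTE: an assumed nearness measure, not taken from the paper."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)                 # ignore self-distance
    same = y[:, None] == y[None, :]
    d_same = np.where(same, D, np.inf).min(axis=1)
    d_other = np.where(~same, D, np.inf).min(axis=1)
    return d_same / d_other

def nn_accuracy(X, y, P, yp):
    """Training accuracy of a 1-NN classifier over prototypes (P, yp)."""
    D = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
    pred = yp[D.argmin(axis=1)]
    return (pred == y).mean()

def plan_prototypes(X, y, target_acc=0.95):
    """Add training samples as prototypes, boundary-nearest first,
    until the desired classification accuracy is reached."""
    order = np.argsort(-boundary_scores(X, y))
    chosen = []
    for i in order:
        chosen.append(i)
        idx = np.array(chosen)
        if nn_accuracy(X, y, X[idx], y[idx]) >= target_acc:
            break
    return np.array(chosen)
```

On two well-separated clusters, this scheme picks the boundary-facing sample of each class first, so very few prototypes suffice; the stopping criterion here is training-set accuracy, whereas the paper may use a held-out or otherwise different measure.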