A procedure is introduced to approximate nearest neighbor (NN) decision boundaries. The algorithm produces a selective subset of the original data such that 1) the subset is consistent, 2) the distance between any sample and its nearest selective neighbor of the same class is less than the distance from the sample to any sample of the other class, and 3) the subset is the smallest possible.
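The three conditions can be sketched as a naive check-and-grow procedure. This is a hypothetical illustration under assumed Euclidean distance, not the paper's exact algorithm: it enforces conditions 1) and 2) by greedily adding violating samples, but unlike the paper's method it does not guarantee condition 3), minimality.

```python
# Hypothetical sketch, not the paper's exact algorithm: enforces the
# "selective" property (conditions 1 and 2) greedily; NOT minimal (cond. 3).
import numpy as np

def nearest_dist(x, pts):
    """Euclidean distance from x to the closest point in pts (inf if empty)."""
    return min((float(np.linalg.norm(x - p)) for p in pts), default=float("inf"))

def violators(X, y, subset):
    """Indices whose nearest same-class subset member is NOT strictly
    closer than their nearest sample of the other class (condition 2)."""
    bad = []
    for i in range(len(X)):
        d_other = nearest_dist(X[i], [X[j] for j in range(len(X)) if y[j] != y[i]])
        d_sub = nearest_dist(X[i], [X[j] for j in subset if y[j] == y[i]])
        if d_sub >= d_other:
            bad.append(i)
    return bad

def greedy_selective_subset(X, y):
    """Grow a subset until condition 2 holds for every sample.  Condition 2
    implies condition 1: 1-NN on the subset classifies all samples correctly,
    since the nearest other-class subset member can be no closer than the
    nearest other-class sample in the full set."""
    subset = []
    while True:
        bad = violators(X, y, subset)
        if not bad:
            return sorted(subset)
        subset.append(bad[0])  # adding a violator itself always fixes it
```

On two well-separated clusters the sketch keeps a single anchor per class; recovering the smallest such subset, as the paper requires, takes a more involved search.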