Fast condensed nearest neighbor rule
ICML '05 Proceedings of the 22nd International Conference on Machine Learning
This work has two main objectives: to introduce a novel algorithm, called the Fast Condensed Nearest Neighbor (FCNN) rule, for computing a training-set-consistent subset for the nearest neighbor decision rule, and to show that condensation algorithms for the nearest neighbor rule can be applied to huge collections of data. The FCNN rule has several interesting properties: it is order independent; its worst-case time complexity is quadratic, though often with a small constant prefactor; and it tends to select points very close to the decision boundary. Furthermore, its structure allows the triangle inequality to be exploited effectively to reduce the computational effort. The FCNN rule outperformed even enhanced variants of existing competence preservation methods both in learning speed and in learning scaling behavior, and often in the size of the model, while guaranteeing the same prediction accuracy. Furthermore, it was three orders of magnitude faster than hybrid instance-based learning algorithms on the MNIST and MIT Face databases, and it computed a model whose accuracy is comparable to that of methods incorporating a noise-filtering pass.
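To make the condensation idea concrete, here is a minimal sketch of an FCNN-style iteration, not the authors' implementation: the subset is seeded with one point per class (the one nearest to the class centroid), and then, for each representative, the misclassified point closest to it within its Voronoi cell is added, until the subset classifies the whole training set correctly. The function name and the brute-force distance computation are illustrative choices; the paper's actual algorithm additionally exploits the triangle inequality to prune distance computations.

```python
# Hedged sketch of an FCNN-style condensation pass (illustrative, not the
# authors' exact algorithm; uses brute-force distances instead of the
# triangle-inequality pruning described in the paper).
import numpy as np

def fcnn_condense(X, y):
    """Return indices of a training-set-consistent subset for the 1-NN rule."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    # Seed S with, for each class, the point nearest to that class's centroid.
    S = []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        S.append(idx[np.argmin(((X[idx] - centroid) ** 2).sum(axis=1))])
    S = list(dict.fromkeys(S))  # deduplicate while preserving order
    while True:
        # Squared distances from every training point to every representative.
        D = ((X[:, None, :] - X[S][None, :, :]) ** 2).sum(axis=2)
        nn = np.argmin(D, axis=1)  # column in S of each point's nearest rep
        additions = []
        for j, p in enumerate(S):
            # Misclassified points lying in the Voronoi cell of representative p.
            cell = np.where((nn == j) & (y != y[p]))[0]
            if len(cell):
                # Add the misclassified point closest to p (one per cell).
                additions.append(cell[np.argmin(D[cell, j])])
        additions = [a for a in dict.fromkeys(additions) if a not in set(S)]
        if not additions:
            return np.array(S)  # S is now consistent with the training set
        S.extend(additions)
```

Because at least one point is added per iteration until consistency is reached, the loop terminates, and the points added are misclassified points near existing representatives, which is why the selected subset concentrates near the decision boundary.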