Adaptive algorithms and stochastic approximations.
How receptive field parameters affect neural learning. In: Proceedings of the 1990 Conference on Advances in Neural Information Processing Systems 3 (NIPS-3).
Artificial neural networks and their application to sequence recognition.
Machine learning, neural and statistical classification.
Self-organizing maps
On-line learning and stochastic approximations. In: On-line Learning in Neural Networks.
Finite-sample convergence properties of the LVQ1 algorithm and the batch LVQ1 algorithm. Neural Processing Letters.
Neural Networks for Pattern Recognition.
Neural Networks: A Comprehensive Foundation.
Pattern Recognition and Neural Networks.
Estimation of Dependences Based on Empirical Data. Springer Series in Statistics.
Large margin nearest neighbor classifiers. In: IWANN '01, Proceedings of the 6th International Work-Conference on Artificial and Natural Neural Networks: Connectionist Models of Neurons, Learning Processes and Artificial Intelligence, Part I.
NP-completeness of the problem of prototype selection in the nearest neighbor method. Pattern Recognition and Image Analysis.
Prototype sample selection based on minimization of the complete cross validation functional. Pattern Recognition and Image Analysis.
This paper introduces a learning strategy for designing a set of prototypes for a 1-nearest-neighbour (1-NN) classifier. In the learning phase, we transform the 1-NN classifier into a maximum classifier whose discriminant functions use the nearest models of a mixture, so that computing the set of prototypes becomes a problem of estimating the centres of a mixture model. However, instead of computing these centres with standard procedures such as the EM algorithm, we derive a learning algorithm that minimises the misclassification rate of the 1-NN classifier on the training set. One possible implementation of the learning algorithm is presented, based on online gradient descent and radial Gaussian kernels for the models of the mixture. Experimental results on handwritten NIST databases show that the proposed method outperforms Kohonen's LVQ algorithms.
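To make the abstract's training loop concrete, here is a minimal sketch of online gradient descent on prototype centres with radial Gaussian kernels. It is an illustration only, not the paper's exact method: it assumes a sigmoid-smoothed misclassification loss and an LVQ-like pull/push update on the nearest same-class and different-class prototypes, and every function and parameter name (gaussian_kernel, train_prototypes, sigma, lr) is hypothetical.

```python
import numpy as np

def gaussian_kernel(x, m, sigma):
    """Radial Gaussian kernel between a sample x and a prototype m."""
    d = x - m
    return np.exp(-(d @ d) / (2.0 * sigma ** 2))

def train_prototypes(X, y, protos, proto_labels, sigma=1.0, lr=0.05,
                     epochs=10, rng=None):
    """Online gradient descent on a sigmoid-smoothed misclassification loss.

    For each training sample, the nearest prototype of the true class is
    pulled toward the sample and the nearest prototype of any other class
    is pushed away, weighted by the kernel responses. Assumes every class
    owns at least one prototype.
    """
    rng = rng or np.random.default_rng(0)
    protos = protos.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x, c = X[i], y[i]
            same = np.flatnonzero(proto_labels == c)
            diff = np.flatnonzero(proto_labels != c)
            # Nearest same-class and nearest different-class prototypes.
            j = same[np.argmin([np.sum((x - protos[s]) ** 2) for s in same])]
            k = diff[np.argmin([np.sum((x - protos[d]) ** 2) for d in diff])]
            gj = gaussian_kernel(x, protos[j], sigma)
            gk = gaussian_kernel(x, protos[k], sigma)
            # Smoothed 0/1 loss: s -> 1 when the wrong prototype responds
            # more strongly than the correct one.
            s = 1.0 / (1.0 + np.exp(gj - gk))
            w = s * (1.0 - s)  # derivative of the sigmoid
            # Gradient of the kernel w.r.t. a centre m is K * (x - m) / sigma^2.
            protos[j] += lr * w * gj * (x - protos[j]) / sigma ** 2
            protos[k] -= lr * w * gk * (x - protos[k]) / sigma ** 2
    return protos
```

In a sketch like this, the prototypes would typically be initialised before the update loop, for instance from class-wise k-means centres, so the pull/push updates refine an already reasonable placement.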