First, we identify an algorithmic defect of the generalized learning vector quantization (GLVQ) scheme that causes it to behave erratically under certain scalings of the input data. We show that GLVQ can behave incorrectly because its learning rates depend reciprocally on the sum of squared distances from an input vector to the node weight vectors. We then propose a new family of models, the GLVQ-F family, that remedies this defect. We derive competitive learning algorithms for each member of the GLVQ-F family and prove that they are invariant to all scalings of the data. GLVQ-F offers a wide range of learning models: it reduces to LVQ as its weighting exponent (a parameter of the algorithm) approaches one from above; as this parameter increases, GLVQ-F transitions to models in which either all nodes are excited in proportion to their inverse distances from the input, or the winner is excited while the losers are penalized; and as the parameter increases without bound, GLVQ-F updates all nodes equally. We illustrate the failure of GLVQ and the success of GLVQ-F on the IRIS data.
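The abstract does not spell out the GLVQ-F update rule, but its stated limiting behavior (winner-take-all as the weighting exponent m approaches one from above, equal updates as m grows without bound, and invariance to scaling of the data) matches fuzzy c-means-style inverse-distance weights. The following is a minimal sketch under that assumption; the function names and the learning-rate parameter are illustrative, not taken from the paper.

```python
import numpy as np

def fuzzy_weights(dists, m):
    """Fuzzy c-means-style node weights: w_i = 1 / sum_j (d_i/d_j)^(2/(m-1)).

    Assumes all distances are nonzero. The weights sum to one, and because
    they depend only on distance *ratios*, rescaling the data leaves them
    unchanged -- the scale-invariance property claimed for GLVQ-F.
    """
    d = np.asarray(dists, dtype=float)
    ratios = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratios.sum(axis=1)

def prototype_update(x, protos, m, lr=0.1):
    """One hypothetical competitive-learning step: every prototype moves
    toward the input x, weighted by its fuzzy membership."""
    d = np.linalg.norm(protos - x, axis=1)
    w = fuzzy_weights(d, m)
    return protos + lr * w[:, None] * (x - protos)
```

As m approaches 1 from above, the exponent 2/(m-1) blows up and the nearest node's weight approaches 1 (LVQ-like winner-take-all); as m grows large, the exponent approaches 0 and all weights approach 1/c, so every node is updated equally, consistent with the limits described in the abstract.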