Discriminative vector quantization schemes such as learning vector quantization (LVQ) and extensions thereof offer efficient and intuitive classifiers based on the representation of classes by prototypes. The original methods, however, rely on the Euclidean distance, which corresponds to the assumption that the data can be represented by isotropic clusters. For this reason, extensions of the methods to more general metric structures have been proposed, such as relevance adaptation in generalized LVQ (GLVQ) and matrix learning in GLVQ. In these approaches, metric parameters are learned from the given classification task, so that a data-driven distance measure is found. In this letter, we consider full matrix adaptation in advanced LVQ schemes. In particular, we introduce matrix learning to a recent statistical formalization of LVQ, robust soft LVQ, and we compare the results on several artificial and real-life data sets to matrix learning in GLVQ, which derives LVQ-like learning from a (heuristic) cost function. In all cases, matrix adaptation yields a significant improvement in classification accuracy. Interestingly, however, the behavior of the two models with respect to prototype locations and extracted matrix dimensions differs in several characteristic ways, depending on the data set.
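To make the adaptive-metric idea concrete, the following is a minimal sketch of one stochastic gradient step of matrix learning on the GLVQ cost mu = (d_J - d_K) / (d_J + d_K), where d_J and d_K are the adaptive distances d_Lambda(x, w) = (x - w)^T Lambda (x - w), with Lambda = Omega^T Omega, to the closest correct and closest incorrect prototype. This is not the authors' implementation: the function name gmlvq_step, the learning rates, and the trace normalization of Lambda are illustrative choices.

```python
import numpy as np

def gmlvq_step(x, y, W, c, Omega, lr_w=0.05, lr_o=0.005):
    """One stochastic gradient step of matrix learning on the GLVQ cost.

    x: sample (d,); y: its label
    W: prototypes (k, d); c: prototype labels (k,)
    Omega: (d, d) matrix; the adaptive metric is Lambda = Omega.T @ Omega
    W and Omega are updated in place.
    """
    Lam = Omega.T @ Omega
    diffs = x - W                                        # (k, d)
    dists = np.einsum('kd,de,ke->k', diffs, Lam, diffs)  # quadratic forms
    J = np.where(c == y)[0][np.argmin(dists[c == y])]    # closest correct
    K = np.where(c != y)[0][np.argmin(dists[c != y])]    # closest incorrect
    dJ, dK = dists[J], dists[K]
    gJ = 2.0 * dK / (dJ + dK) ** 2                       # d mu / d d_J
    gK = 2.0 * dJ / (dJ + dK) ** 2                       # -d mu / d d_K
    W[J] += lr_w * gJ * Lam @ diffs[J]                   # attract correct prototype
    W[K] -= lr_w * gK * Lam @ diffs[K]                   # repel incorrect prototype
    Omega -= lr_o * (gJ * np.outer(Omega @ diffs[J], diffs[J])
                     - gK * np.outer(Omega @ diffs[K], diffs[K]))
    Omega /= np.sqrt(np.trace(Omega.T @ Omega))          # keep trace(Lambda) = 1

# Usage on two synthetic Gaussian clusters:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)
W, c, Omega = X[[0, 50]].copy(), np.array([0, 1]), np.eye(2)
for _ in range(30):
    for i in rng.permutation(len(X)):
        gmlvq_step(X[i], y[i], W, c, Omega)
```

An update of this form both improves accuracy and exposes discriminative structure: the eigenvectors of the learned Lambda with large eigenvalues correspond to the extracted matrix dimensions mentioned above.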