In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can display a tendency towards oversimplification in the course of training. An overly pronounced elimination of dimensions in feature space can degrade performance and may lead to instabilities in the training. We focus on matrix learning in generalized LVQ (GLVQ). Extending the cost function by an appropriate regularization term prevents this unfavorable behavior and can help to improve the generalization ability. The approach is first tested and illustrated in terms of artificial model data. Furthermore, we apply the scheme to benchmark classification data sets from the UCI Repository of Machine Learning. We demonstrate the usefulness of regularization also in the case of rank-limited relevance matrices, i.e., matrix learning with an implicit, low-dimensional representation of the data.
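The ingredients described above can be sketched in a few lines of Python. This is a rough illustration under stated assumptions, not the authors' implementation: the adaptive distance uses a relevance matrix Λ = ΩᵀΩ, the GLVQ classifier function is the usual (d⁺ − d⁻)/(d⁺ + d⁻) ratio, and the regularization penalty is modeled here as −μ ln det(ΩΩᵀ), which grows as Ω approaches singularity and thereby discourages the collapse of Λ to low rank. The names `mu_reg`, `glvq_cost`, and `regularization_term` are illustrative choices.

```python
import numpy as np

def adaptive_distance(x, w, omega):
    # Squared adaptive distance d_Lambda(x, w) = (x - w)^T Omega^T Omega (x - w),
    # where the relevance matrix is Lambda = Omega^T Omega.
    diff = omega @ (x - w)
    return float(diff @ diff)

def glvq_cost(x, w_plus, w_minus, omega):
    # GLVQ classifier function (d+ - d-) / (d+ + d-): negative when x
    # lies closer to the correct prototype w_plus than to w_minus.
    d_plus = adaptive_distance(x, w_plus, omega)
    d_minus = adaptive_distance(x, w_minus, omega)
    return (d_plus - d_minus) / (d_plus + d_minus)

def regularization_term(omega, mu_reg):
    # Penalty -mu * ln det(Omega Omega^T): finite for well-conditioned Omega,
    # diverging as Omega becomes singular, so it counteracts an overly
    # pronounced elimination of feature-space dimensions.
    sign, logdet = np.linalg.slogdet(omega @ omega.T)
    return -mu_reg * logdet
```

During training, the penalty would simply be added to the GLVQ cost summed over the training samples, so that gradient descent on Ω trades off discriminative sharpening against keeping the relevance matrix away from rank collapse.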