Competitive learning algorithms for vector quantization. Neural Networks.
Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation.
Expansive and Competitive Learning for Vector Quantization. Neural Processing Letters.
Kernel Methods for Pattern Analysis.
Supervised Neural Gas with General Similarity Measure. Neural Processing Letters.
On the Convergence of a Population-Based Global Optimization Algorithm. Journal of Global Optimization.
An adaptive incremental LBG for vector quantization. Neural Networks.
Competitive learning and soft competition for vector quantizer design. IEEE Transactions on Signal Processing.
Vector quantization by deterministic annealing. IEEE Transactions on Information Theory.
Input space versus feature space in kernel-based methods. IEEE Transactions on Neural Networks.
The pre-image problem in kernel methods. IEEE Transactions on Neural Networks.
Survey of clustering algorithms. IEEE Transactions on Neural Networks.
Rival penalized competitive learning for clustering analysis, RBF net, and curve detection. IEEE Transactions on Neural Networks.
In this paper we present a necessary and sufficient condition for the global optimality of unsupervised Learning Vector Quantization (LVQ) in kernel space. In particular, we generalize the results obtained for expansive and competitive learning for vector quantization in Euclidean space to the general case of a kernel-based distance metric. Based on this result, we present a novel kernel LVQ algorithm whose update rule consists of two terms: the former regulates the force of attraction between the synaptic weight vectors and the inputs; the latter regulates the repulsion between the weights and the center of gravity of the dataset. We show how the repulsion mechanism drives the algorithm toward global optimality of the quantization error. Simulation results on common image quantization tasks show that the algorithm outperforms recently published quantization models such as Enhanced LBG [Patane, G., Russo, M., 2001. The enhanced LBG algorithm. Neural Networks 14 (9), 1219-1237] and Adaptive Incremental LBG [Shen, F., Hasegawa, O., 2006. An adaptive incremental LBG for vector quantization. Neural Networks 19 (5), 694-704].
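To illustrate the flavor of the two-term update rule described above, the following is a minimal sketch in plain Euclidean space (not the kernel-based metric of the paper): the winning codebook vector is attracted toward the input and simultaneously repelled from the dataset centroid. The function name, learning rates, and the exact form of the repulsion term are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def expansive_lvq_step(weights, x, g, lr_attract=0.05, lr_repel=0.01):
    """One competitive-learning step (hypothetical Euclidean sketch).

    weights : (K, D) codebook vectors, updated in place
    x       : (D,) input vector
    g       : (D,) center of gravity (mean) of the dataset
    """
    # Winner-take-all: codebook vector closest to the input.
    j = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Attraction toward the input plus repulsion away from the centroid.
    weights[j] += lr_attract * (x - weights[j]) + lr_repel * (weights[j] - g)
    return j

# Usage: quantize synthetic 2-D data with 4 codebook vectors.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))
g = data.mean(axis=0)
W = data[rng.choice(len(data), 4, replace=False)].copy()
for x in data:
    expansive_lvq_step(W, x, g)
```

The repulsion term pushes weights outward from the centroid, which is the mechanism the abstract credits with escaping poor local minima of the quantization error.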