Convergence of the Self-Organizing Map (SOM) and Neural Gas (NG) is usually analyzed from the viewpoint of stochastic gradient descent, a class of algorithms characterized by very slow convergence. However, we have found empirically that One-Pass realizations of SOM and NG match or even improve on the slower realizations when the performance measure is distortion. One-Pass realizations use each data sample only once, imposing a very fast annealing of the learning parameters that does not satisfy the convergence conditions of stochastic gradient descent. This empirical evidence leads us to propose that the appropriate setting for the convergence analysis of SOM, NG, and similar competitive clustering algorithms is the field of Graduated Nonconvexity algorithms, and we show that they fit naturally into this framework.
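To make the one-pass idea concrete, the following is a minimal sketch (not the authors' implementation) of a one-pass Neural Gas in Python/NumPy: each sample is presented exactly once, while the learning rate eps and neighborhood range lambda are annealed from initial to final values over that single pass. The schedule parameters (eps0, eps_f, lambda0, lambda_f) and the geometric annealing are illustrative assumptions only.

import numpy as np

def one_pass_neural_gas(data, n_units, lambda0=None, lambda_f=0.01,
                        eps0=0.5, eps_f=0.005, rng=None):
    """One-pass Neural Gas sketch: each row of `data` (shape
    (n_samples, dim)) is used exactly once; eps and lambda decay
    geometrically across the single pass (illustrative schedule)."""
    rng = np.random.default_rng() if rng is None else rng
    n_samples = len(data)
    if lambda0 is None:
        lambda0 = n_units / 2.0  # common heuristic starting range
    # Initialize the codebook from randomly chosen data points.
    w = data[rng.choice(n_samples, n_units, replace=False)].astype(float)
    for t, i in enumerate(rng.permutation(n_samples)):
        frac = t / max(n_samples - 1, 1)          # 0 -> 1 over the pass
        lam = lambda0 * (lambda_f / lambda0) ** frac  # fast annealing
        eps = eps0 * (eps_f / eps0) ** frac
        # Rank units by distance to the sample (Neural Gas rule):
        # rank 0 is the winner, rank 1 the runner-up, and so on.
        d = np.linalg.norm(w - data[i], axis=1)
        ranks = np.argsort(np.argsort(d))
        h = np.exp(-ranks / lam)                  # rank-based neighborhood
        w += eps * h[:, None] * (data[i] - w)
    return w

# Usage example on synthetic 2-D data:
# X = np.random.default_rng(0).normal(size=(1000, 2))
# codebook = one_pass_neural_gas(X, n_units=16)

Note that the annealing here is tied to the sample index rather than to an epoch counter, so the learning parameters shrink far faster than the Robbins-Monro conditions of stochastic gradient descent would allow; this is exactly the regime the abstract argues is better understood through Graduated Nonconvexity.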