SOM and neural gas as graduated nonconvexity algorithms

  • Authors:
  • Ana I. González; Alicia D'Anjou; M. Teresa García-Sebastian; Manuel Graña

  • Affiliations:
  • All authors: Grupo de Inteligencia Computacional, Facultad de Informática, UPV/EHU, San Sebastián, Spain

  • Venue:
  • ICCSA'06 Proceedings of the 2006 international conference on Computational Science and Its Applications - Volume Part III
  • Year:
  • 2006


Abstract

Convergence of the Self-Organizing Map (SOM) and Neural Gas (NG) is usually analyzed from the point of view of stochastic gradient descent. This class of algorithms is characterized by a very slow convergence rate. However, we have found empirically that One-Pass realizations of SOM and NG match or even improve on the slower realizations when the performance measure is the distortion. One-Pass realizations use each data sample only once, imposing a very fast reduction of the learning parameters that does not conform to the convergence requirements of stochastic gradient descent. This empirical evidence leads us to propose that the appropriate setting for the convergence analysis of SOM, NG and similar competitive clustering algorithms is the field of Graduated Nonconvexity algorithms. We show that they can easily be cast in this framework.
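The one-pass scheme described in the abstract can be sketched as follows. This is an illustrative reading, not the authors' implementation: each sample is presented exactly once while the learning rate and neighborhood width shrink aggressively over that single pass, and the result is scored by the distortion (mean squared distance to the nearest codebook vector). The function names, decay schedules, and parameter values are assumptions.

```python
import numpy as np

def one_pass_som(data, n_units, alpha0=0.5, sigma0=None, seed=0):
    """One-pass SOM on a 1-D map: each sample is presented exactly once,
    while the learning rate and neighborhood width decay over that single
    pass (a schedule chosen for illustration, not taken from the paper)."""
    rng = np.random.default_rng(seed)
    n_samples, dim = data.shape
    if sigma0 is None:
        sigma0 = n_units / 2.0
    # Codebook initialized from randomly chosen data samples.
    w = data[rng.choice(n_samples, n_units, replace=False)].astype(float)
    positions = np.arange(n_units)
    for t, idx in enumerate(rng.permutation(n_samples)):
        x = data[idx]
        frac = t / max(n_samples - 1, 1)
        alpha = alpha0 * (1.0 - frac)              # fast linear decay to 0
        sigma = sigma0 * (0.01 / sigma0) ** frac   # neighborhood shrinks to ~0.01
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))
        # Gaussian neighborhood around the best-matching unit on the map.
        h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
        w += alpha * h[:, None] * (x - w)
    return w

def distortion(data, w):
    """Mean squared distance from each sample to its nearest codebook vector."""
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) ** 2))
```

The aggressive decay is the point: it violates the step-size conditions of stochastic-approximation convergence proofs, yet empirically yields competitive distortion, which motivates viewing the schedule as a graduated-nonconvexity continuation instead.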