On the stationary state of Kohonen's self-organizing sensory mapping
Biological Cybernetics
We discuss almost-everywhere (a.e.) and mean convergence of the Kohonen algorithm, considered as a stochastic process. We describe the various conditions ensuring a.e. convergence and analyze their connection with the decay rate of the learning parameter. The rate of convergence is discussed for different choices of the learning parameter. We prove rigorously that the decay rate of the learning parameter most commonly used in applications is a sufficient condition for a.e. convergence, and we verify this numerically. The paper also aims to clarify the state of the art on the convergence properties of the algorithm, in view of the growing number of applications of Kohonen neural networks. We apply our theorem and considerations to genetic classification, a rapidly developing field.
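To make the setting concrete, the following is a minimal sketch of a one-dimensional Kohonen (SOM) update with a decaying learning parameter. The schedule eta_t = eta_0 / (1 + t) is one common choice satisfying the Robbins-Monro-type conditions (sum of eta_t diverges, sum of eta_t^2 converges) usually invoked for a.e. convergence; the paper's exact schedule, neighborhood, and input distribution are not specified here, so those details are illustrative assumptions.

```python
import numpy as np

def kohonen_1d(n_units=10, n_steps=20000, eta0=0.5, seed=0):
    """Illustrative 1-D Kohonen update with decaying learning rate.

    Assumptions (not taken from the paper): uniform input on [0, 1],
    neighborhood of radius 1, and the schedule eta_t = eta0 / (1 + t),
    for which sum(eta_t) diverges while sum(eta_t^2) converges.
    """
    rng = np.random.default_rng(seed)
    # Start from an ordered configuration; in 1-D, ordered states are
    # preserved by the update below.
    w = np.sort(rng.uniform(0.0, 1.0, n_units))
    for t in range(n_steps):
        x = rng.uniform(0.0, 1.0)          # random input signal
        eta = eta0 / (1.0 + t)             # decaying learning parameter
        c = int(np.argmin(np.abs(w - x)))  # best-matching unit (winner)
        # Update the winner and its immediate neighbors toward x.
        lo, hi = max(0, c - 1), min(n_units, c + 2)
        w[lo:hi] += eta * (x - w[lo:hi])
    return w

weights = kohonen_1d()
```

With a uniform input distribution and a sufficiently slow decay, the weights remain ordered and settle toward a roughly equispaced configuration on [0, 1]; decaying eta_t too quickly would freeze the map before it equilibrates, which is the trade-off the convergence conditions formalize.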