Parallel implementation of self-organizing maps
This paper describes two variants of Kohonen's self-organizing feature map (SOFM) algorithm. Both variants update the weights only after a group of input vectors has been presented, whereas the original algorithm updates the weights after every input vector. Their main advantage is that they expose a finer grain of parallelism, suitable for implementation on machines with a very large number of processors, without compromising the desired properties of the algorithm. It is proved that, for one-dimensional (1-D) maps with 1-D continuous input and weight spaces, the strictly increasing or decreasing weight configurations form an absorbing class in both variants, exactly as in the original algorithm. Ordering of the maps and convergence to asymptotic values are also proved, again confirming the theoretical results obtained for the original algorithm. Simulations of a real-world application, using two-dimensional (2-D) maps on 12-D speech data, back up the theoretical results and show that one of the variants performs in all respects almost as well as the original algorithm. Finally, the practical utility of the finer-grained parallelism is confirmed by the description of a massively parallel hardware system that makes effective use of the better variant.
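The group-update idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual formulation: it trains a 1-D map with a Gaussian neighborhood, accumulating the weight corrections over each group of inputs and applying them once per group, instead of after every input vector. All function names, parameter values, and decay schedules here are illustrative assumptions.

```python
import numpy as np

def batch_sofm(data, n_units=10, n_epochs=20, lr=0.5, sigma=2.0,
               group_size=8, seed=0):
    """Sketch of a group-update SOFM variant (illustrative, not the
    paper's exact algorithm): corrections are accumulated over a group
    of input vectors and applied in a single update per group."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((n_units, dim))   # 1-D map of n_units neurons
    positions = np.arange(n_units)         # neuron indices along the map

    for _ in range(n_epochs):
        for start in range(0, len(data), group_size):
            group = data[start:start + group_size]
            delta = np.zeros_like(weights)
            for x in group:
                # best-matching unit for this input vector
                bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
                # Gaussian neighborhood around the winner on the 1-D map
                h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
                # accumulate the correction; do NOT apply it yet
                delta += lr * h[:, None] * (x - weights)
            # single weight update after the whole group (the key difference
            # from the original algorithm, which updates per input vector)
            weights += delta / len(group)
        lr *= 0.95                         # assumed decay schedules
        sigma = max(0.5, sigma * 0.95)
    return weights
```

Because each group's corrections are computed against a fixed set of weights, the per-input work inside a group is independent and can be distributed across many processors, which is the finer grain of parallelism the paper exploits.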