NIPSOM

  • Authors:
Iren Valova; Derek Beaton; Dan MacLean; John Hammond

  • Venue:
  • The Computer Journal
  • Year:
  • 2010

Abstract

This work describes a parallel implementation of a distinctly different concept in self-organizing maps (SOMs): processing the input as a whole, in parallel, via neurons attracted to cover regions of the input space rather than single input instances. The algorithm is analyzed from the perspective of its parallelism, and a simple multi-machine setup is used to demonstrate the parallel nature of the neural network architecture. The classic SOM algorithm (Ozdzynski, P., Lin, A., Liljeholm, M. and Beatty, J. (2002) A parallel implementation of Kohonen's self-organizing map algorithm: performance and scalability. Neurocomputing, 44–46, 567–571) displays some parallel characteristics, but contains several bottlenecks that prevent truly parallel execution. Most parallel SOM implementations either rely on expensive hardware that requires considerable algorithm adaptation, run multiple SOM instances on multiple machines and select the most relevant result, or adopt a controller–worker architecture in which the controller machine presents a significant bottleneck. Our algorithm (Valova, I., Szer, D., Gueorguieva, G. and Buer, A. (2005) A parallel growing architecture for self-organizing maps with unsupervised learning. Neurocomputing, 68C, 177–195), eventually named ParaSOM, requires no significant adaptation to be ported to a parallel setting, and the analysis and implementations we discuss illustrate this quality. This work presents two designs for parallelizing our algorithm and analyzes both, demonstrating the ease of the transition to a parallel paradigm. We illustrate the performance gain of the proposed implementation and compare it with similar methods, from the perspectives of both parallelism and the nature of the mapping algorithm.
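To make the bottleneck mentioned above concrete, the following is a minimal sketch of a classic Kohonen SOM training step, not the authors' ParaSOM. The distance computation over all neurons is naturally parallel, but the best-matching-unit (BMU) search is a global reduction and each input must be consumed before the next, which is the sequential constraint the abstract contrasts with processing the input as a whole. All names and parameters here (`som_step`, the 1-D neuron chain, the learning rate and neighborhood width) are illustrative assumptions.

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One classic Kohonen update for a single input x.

    The per-input BMU search below is the serialization point:
    inputs are processed one at a time, not as a whole.
    """
    # Distances from every neuron to this one input (parallelizable part).
    d = np.linalg.norm(weights - x, axis=1)
    bmu = int(np.argmin(d))  # global reduction: a synchronization point
    # Gaussian neighborhood on a 1-D chain of neurons, for simplicity.
    idx = np.arange(len(weights))
    h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
    # Pull the BMU and its neighbors toward the input.
    return weights + lr * h[:, None] * (x - weights)

rng = np.random.default_rng(0)
w = rng.random((10, 2))            # 10 neurons in a 2-D input space
for x in rng.random((100, 2)):     # inputs consumed strictly one at a time
    w = som_step(w, x)
```

In a controller–worker parallelization of this classic scheme, the distance vector `d` can be computed in shards by workers, but the `argmin` reduction and the sequential input loop remain on the controller, which is exactly the bottleneck the abstract describes.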