Flexible data parallel training of neural networks using MIMD-Computers

  • Authors:
  • M. Besch; H. W. Pohl

  • Venue:
  • PDP '95 Proceedings of the 3rd Euromicro Workshop on Parallel and Distributed Processing
  • Year:
  • 1995

Abstract

An approach to flexible and efficient data parallel simulation of neural networks on large-scale MIMD machines is presented. We regard the exploitation of the inherent parallelism of neural network models as necessary if larger networks and training data sets are to be considered. At the same time, it is essential to provide the flexibility to investigate various training algorithms, or to create new ones, without intimate knowledge of the underlying hardware architecture and communication subsystem. We therefore encapsulated the functional units that are central to parallel execution. Based on these components, even complex training algorithms can be formulated as a sequential program while the details of the parallelization remain transparent. Communication tasks are performed very efficiently by using a distributed logarithmic tree. This logical structure additionally allows a direct mapping of the algorithm onto various important parallel architectures. Finally, a theoretical time complexity model is given and its correspondence to empirical data is shown.
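
The paper itself gives no code, but the communication pattern named in the abstract can be illustrated with a minimal sketch: per-worker gradients from a data-parallel step are combined pairwise in ceil(log2(P)) rounds, which is what a distributed logarithmic tree reduction does. The worker count, gradient shapes, and function name below are illustrative assumptions for a single-process simulation, not the authors' actual interface.

```python
import numpy as np

def tree_reduce(gradients):
    """Combine per-worker gradients pairwise in ceil(log2(P)) rounds,
    simulating a distributed logarithmic tree reduction on one process."""
    buffers = [g.copy() for g in gradients]
    step = 1
    while step < len(buffers):
        for i in range(0, len(buffers) - step, 2 * step):
            # "Worker" i accumulates the partial sum held by "worker" i + step.
            buffers[i] += buffers[i + step]
        step *= 2
    return buffers[0]  # the root of the tree ends up with the global sum

# Illustrative usage: 8 workers, each holding a local gradient over its data shard.
rng = np.random.default_rng(0)
local_grads = [rng.normal(size=4) for _ in range(8)]
global_grad = tree_reduce(local_grads)
assert np.allclose(global_grad, np.sum(local_grads, axis=0))
```

In a real MIMD setting each accumulation step would be a message between two processors; the per-step cost then enters a time complexity model as a communication term that grows only logarithmically with the number of processors.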