Parallel implementation of backpropagation neural networks on a heterogeneous array of transputers

  • Authors:
  • Shou King Foo; P. Saratchandran; N. Sundararajan

  • Affiliations:
  • Sch. of Electr. & Electron. Eng., Nanyang Technol. Inst.

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 1997

Abstract

This paper analyzes a parallel implementation of the backpropagation training algorithm on a heterogeneous transputer network (i.e., transputers of different speed and memory) connected in a pipelined ring topology. Training-set parallelism is employed as the parallelizing paradigm for the backpropagation algorithm. It is shown through analysis that finding the optimal allocation of training patterns amongst the processors, so as to minimize the time for a training epoch, is a mixed integer programming problem. Using mixed integer programming, optimal pattern allocations for heterogeneous processor networks comprising a mixture of T805-20 (20 MHz) and T805-25 (25 MHz) transputers are derived theoretically for two benchmark problems. The epoch time corresponding to the optimal pattern allocations is then obtained experimentally for the benchmark problems on the T805-20/T805-25 heterogeneous networks. A Monte Carlo simulation study is carried out to statistically verify the optimality of the epoch time obtained from the mixed-integer-programming-based allocations. In this study, pattern allocations are randomly generated and the corresponding epoch time is measured experimentally on the heterogeneous network. The mean and standard deviation of the epoch times from the random allocations are then compared with the optimal epoch time. The results show that the optimal epoch time is always lower than the mean epoch time by more than three standard deviations (3σ) for all sample sizes used in the study, thus validating the theoretical analysis.
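
The sketch below is a rough, illustrative toy of the allocation problem the abstract describes, not the paper's formulation or solver: under training-set parallelism the epoch time is dominated by the processor with the largest compute share, so the allocation that minimizes epoch time solves a min-max integer program. The per-pattern costs, the communication term, and the brute-force search (standing in for a mixed integer programming solver) are all assumptions made for illustration.

```python
# Illustrative sketch only: a toy version of optimal pattern allocation on a
# heterogeneous processor network. Costs and overheads are assumed values,
# not figures from the paper.
from itertools import product

def epoch_time(alloc, per_pattern_cost, comm_overhead=0.0):
    """Epoch time under training-set parallelism: the processor with the
    largest compute share dominates, plus a fixed communication cost for
    circulating weight changes around the ring."""
    return max(n * t for n, t in zip(alloc, per_pattern_cost)) + comm_overhead

def optimal_allocation(num_patterns, per_pattern_cost, comm_overhead=0.0):
    """Brute-force search over integer allocations summing to num_patterns;
    a stand-in for the mixed integer programming step in the paper."""
    p = len(per_pattern_cost)
    best_alloc, best_time = None, float("inf")
    # Enumerate allocations for the first p-1 processors; the last gets the rest.
    for head in product(range(num_patterns + 1), repeat=p - 1):
        if sum(head) > num_patterns:
            continue
        alloc = head + (num_patterns - sum(head),)
        t = epoch_time(alloc, per_pattern_cost, comm_overhead)
        if t < best_time:
            best_alloc, best_time = alloc, t
    return best_alloc, best_time

if __name__ == "__main__":
    # Hypothetical four-processor network: two faster and two slower
    # processors (e.g., T805-25 vs. T805-20 class), assumed costs in ms/pattern.
    costs = [1.0, 1.0, 1.25, 1.25]
    alloc, t = optimal_allocation(num_patterns=64, per_pattern_cost=costs)
    print("optimal allocation:", alloc, "epoch time:", t)
```

In this toy setting the faster processors receive proportionally more patterns, which mirrors the intuition behind the paper's result: the optimal allocation balances per-processor compute time so that no single transputer prolongs the epoch. The paper's Monte Carlo verification then amounts to comparing such an optimal epoch time against the mean and standard deviation of epoch times from randomly generated allocations.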